Test Report: Docker_Linux_containerd_arm64 22158

84cd1e71ac9e612e02e936645952571e7d114b51:2025-12-16:42799

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.07
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.01
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.26
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.36
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.33
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 732.85
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.34
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.73
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.02
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.37
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.66
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.38
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.12
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 126.42
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.25
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.25
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.28
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.29
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.55
358 TestKubernetesUpgrade 800.67
404 TestStartStop/group/no-preload/serial/FirstStart 512.54
437 TestStartStop/group/newest-cni/serial/FirstStart 502.47
438 TestStartStop/group/no-preload/serial/DeployApp 2.97
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 104.72
442 TestStartStop/group/no-preload/serial/SecondStart 370.33
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 105.1
445 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.79
448 TestStartStop/group/newest-cni/serial/SecondStart 374.91
452 TestStartStop/group/newest-cni/serial/Pause 9.28
459 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 268.05
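
Each failure below can be reproduced outside CI by filtering the minikube integration suite on the test names in this table. A hedged sketch, assuming a minikube source checkout and a prebuilt out/minikube-linux-arm64; the exact harness flags this Jenkins job passes are not shown in the report:

	# Select tests by name with go test's -run filter; names come from the table above.
	go test ./test/integration -v -timeout 120m -run 'TestFunctionalNewestKubernetes'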
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1216 02:43:51.135343 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:44:18.850734 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.139507 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.146097 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.157661 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.179226 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.220741 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.302229 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.463824 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.785517 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:52.427686 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:53.709383 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:56.270762 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:46:01.392642 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:46:11.634624 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:46:32.116053 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:47:13.078553 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:48:35.001603 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:48:51.139336 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.63202031s)

-- stdout --
	* [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Found network options:
	  - HTTP_PROXY=localhost:41589
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:41589 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001007216s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001643344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001643344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
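The suggestion printed at the end of the stderr block points at the kubelet cgroup driver. A hedged retry, reusing the arguments of the failing invocation plus the override minikube itself suggests (not executed as part of this report), would be:

	# Sketch only: same arguments as the failing run above, with the suggested
	# kubelet cgroup-driver override appended.
	out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd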
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 6 (328.40053ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 02:50:02.792324 1842311 status.go:458] kubeconfig endpoint: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
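The status output above flags a stale kubectl context, and stderr shows the profile missing from the kubeconfig. Following minikube's own hint, the context could be repaired with (hedged; not run as part of this report):

	# Rewrites the kubeconfig entry for this profile to the current endpoint.
	out/minikube-linux-arm64 -p functional-389759 update-context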
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-853651 --kill=true                                                                                                                        │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ addons         │ functional-853651 addons list                                                                                                                           │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ addons         │ functional-853651 addons list -o json                                                                                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node-connect --url                                                                                                      │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-853651 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-853651 --alsologtostderr -v=1                                                                                          │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service list                                                                                                                          │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service list -o json                                                                                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service --namespace=default --https --url hello-node                                                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node --url --format={{.IP}}                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node --url                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format short --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format yaml --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ ssh            │ functional-853651 ssh pgrep buildkitd                                                                                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ image          │ functional-853651 image ls --format json --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format table --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls                                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ delete         │ -p functional-853651                                                                                                                                    │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:41:41
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:41:41.863776 1836804 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:41:41.863878 1836804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:41:41.863882 1836804 out.go:374] Setting ErrFile to fd 2...
	I1216 02:41:41.863892 1836804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:41:41.864246 1836804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:41:41.864730 1836804 out.go:368] Setting JSON to false
	I1216 02:41:41.865580 1836804 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30246,"bootTime":1765822656,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:41:41.865669 1836804 start.go:143] virtualization:  
	I1216 02:41:41.870933 1836804 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:41:41.875573 1836804 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:41:41.875738 1836804 notify.go:221] Checking for updates...
	I1216 02:41:41.882559 1836804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:41:41.885814 1836804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:41:41.889016 1836804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:41:41.892027 1836804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:41:41.895080 1836804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:41:41.898309 1836804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:41:41.929920 1836804 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:41:41.930030 1836804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:41:42.007624 1836804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 02:41:41.995689401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:41:42.007729 1836804 docker.go:319] overlay module found
	I1216 02:41:42.015251 1836804 out.go:179] * Using the docker driver based on user configuration
	I1216 02:41:42.020637 1836804 start.go:309] selected driver: docker
	I1216 02:41:42.020680 1836804 start.go:927] validating driver "docker" against <nil>
	I1216 02:41:42.020694 1836804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:41:42.021642 1836804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:41:42.112719 1836804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 02:41:42.086328211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:41:42.112960 1836804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:41:42.113241 1836804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:41:42.116434 1836804 out.go:179] * Using Docker driver with root privileges
	I1216 02:41:42.119601 1836804 cni.go:84] Creating CNI manager for ""
	I1216 02:41:42.119675 1836804 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:41:42.119687 1836804 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 02:41:42.119781 1836804 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:41:42.123394 1836804 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:41:42.128174 1836804 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:41:42.131522 1836804 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:41:42.134670 1836804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:41:42.134730 1836804 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:41:42.134741 1836804 cache.go:65] Caching tarball of preloaded images
	I1216 02:41:42.134766 1836804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:41:42.134852 1836804 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:41:42.134861 1836804 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:41:42.135286 1836804 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:41:42.135320 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json: {Name:mk1bd7f7413370999f48167ac4c3dbb5d6b00856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
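
The two lines above show the generated cluster config being serialized to the profile's config.json under a named file lock. As a rough illustration only, here is a minimal Go sketch of such a save, assuming a simplified, hypothetical Profile struct (not minikube's actual types) and using a write-to-temp-then-rename in place of the lock:

    package main

    import (
        "encoding/json"
        "log"
        "os"
        "path/filepath"
    )

    // Profile is a hypothetical, trimmed-down stand-in for minikube's
    // cluster config; only a few illustrative fields are included.
    type Profile struct {
        Name              string `json:"Name"`
        Driver            string `json:"Driver"`
        KubernetesVersion string `json:"KubernetesVersion"`
        APIServerPort     int    `json:"APIServerPort"`
    }

    // saveConfig writes the profile as JSON via a temp file plus rename,
    // so a crash mid-write never leaves a truncated config.json behind.
    func saveConfig(dir string, p Profile) error {
        data, err := json.MarshalIndent(p, "", "  ")
        if err != nil {
            return err
        }
        tmp := filepath.Join(dir, ".config.json.tmp")
        if err := os.WriteFile(tmp, data, 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, filepath.Join(dir, "config.json"))
    }

    func main() {
        p := Profile{Name: "functional-389759", Driver: "docker",
            KubernetesVersion: "v1.35.0-beta.0", APIServerPort: 8441}
        if err := saveConfig(".", p); err != nil {
            log.Fatal(err)
        }
    }
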
	I1216 02:41:42.159600 1836804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:41:42.159614 1836804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:41:42.159636 1836804 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:41:42.159674 1836804 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:41:42.159813 1836804 start.go:364] duration metric: took 123.984µs to acquireMachinesLock for "functional-389759"
	I1216 02:41:42.159842 1836804 start.go:93] Provisioning new machine with config: &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 02:41:42.159922 1836804 start.go:125] createHost starting for "" (driver="docker")
	I1216 02:41:42.163617 1836804 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1216 02:41:42.164034 1836804 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:41589 to docker env.
	I1216 02:41:42.164064 1836804 start.go:159] libmachine.API.Create for "functional-389759" (driver="docker")
	I1216 02:41:42.164089 1836804 client.go:173] LocalClient.Create starting
	I1216 02:41:42.164165 1836804 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 02:41:42.164219 1836804 main.go:143] libmachine: Decoding PEM data...
	I1216 02:41:42.164235 1836804 main.go:143] libmachine: Parsing certificate...
	I1216 02:41:42.164293 1836804 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 02:41:42.164315 1836804 main.go:143] libmachine: Decoding PEM data...
	I1216 02:41:42.164327 1836804 main.go:143] libmachine: Parsing certificate...
	I1216 02:41:42.164825 1836804 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 02:41:42.190398 1836804 cli_runner.go:211] docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 02:41:42.190480 1836804 network_create.go:284] running [docker network inspect functional-389759] to gather additional debugging logs...
	I1216 02:41:42.190497 1836804 cli_runner.go:164] Run: docker network inspect functional-389759
	W1216 02:41:42.216864 1836804 cli_runner.go:211] docker network inspect functional-389759 returned with exit code 1
	I1216 02:41:42.216889 1836804 network_create.go:287] error running [docker network inspect functional-389759]: docker network inspect functional-389759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-389759 not found
	I1216 02:41:42.216907 1836804 network_create.go:289] output of [docker network inspect functional-389759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-389759 not found
	
	** /stderr **
	I1216 02:41:42.217010 1836804 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:41:42.259175 1836804 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018c1660}
	I1216 02:41:42.259210 1836804 network_create.go:124] attempt to create docker network functional-389759 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 02:41:42.259272 1836804 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-389759 functional-389759
	I1216 02:41:42.329083 1836804 network_create.go:108] docker network functional-389759 192.168.49.0/24 created
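
The network_create step above reduces to a single docker CLI invocation. A minimal sketch of driving that same command from Go via os/exec, assuming a docker CLI on PATH, with the subnet, gateway, MTU and labels copied from the log line above:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // createNetwork mirrors the "docker network create" call in the log:
    // a bridge network with a fixed subnet, gateway, MTU and minikube labels.
    func createNetwork(name, subnet, gateway string) error {
        args := []string{
            "network", "create",
            "--driver=bridge",
            "--subnet=" + subnet,
            "--gateway=" + gateway,
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=" + name,
            name,
        }
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createNetwork("functional-389759", "192.168.49.0/24", "192.168.49.1"); err != nil {
            log.Fatal(err)
        }
    }
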
	I1216 02:41:42.329108 1836804 kic.go:121] calculated static IP "192.168.49.2" for the "functional-389759" container
	I1216 02:41:42.329190 1836804 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 02:41:42.347605 1836804 cli_runner.go:164] Run: docker volume create functional-389759 --label name.minikube.sigs.k8s.io=functional-389759 --label created_by.minikube.sigs.k8s.io=true
	I1216 02:41:42.366361 1836804 oci.go:103] Successfully created a docker volume functional-389759
	I1216 02:41:42.366448 1836804 cli_runner.go:164] Run: docker run --rm --name functional-389759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-389759 --entrypoint /usr/bin/test -v functional-389759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 02:41:42.894590 1836804 oci.go:107] Successfully prepared a docker volume functional-389759
	I1216 02:41:42.894666 1836804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:41:42.894675 1836804 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 02:41:42.894754 1836804 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-389759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 02:41:46.770975 1836804 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-389759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.876187627s)
	I1216 02:41:46.770997 1836804 kic.go:203] duration metric: took 3.87631852s to extract preloaded images to volume ...
	W1216 02:41:46.771179 1836804 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 02:41:46.771280 1836804 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 02:41:46.840011 1836804 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-389759 --name functional-389759 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-389759 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-389759 --network functional-389759 --ip 192.168.49.2 --volume functional-389759:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 02:41:47.118824 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Running}}
	I1216 02:41:47.140960 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:41:47.165207 1836804 cli_runner.go:164] Run: docker exec functional-389759 stat /var/lib/dpkg/alternatives/iptables
	I1216 02:41:47.219862 1836804 oci.go:144] the created container "functional-389759" has a running status.
	I1216 02:41:47.219881 1836804 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa...
	I1216 02:41:47.371815 1836804 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
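
kic.go:225 above generates a fresh RSA key for SSH access to the node container and installs the public half as authorized_keys (the 381-byte copy in the next line). A minimal sketch of that keypair generation, assuming golang.org/x/crypto/ssh is available:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate the private key (the id_rsa equivalent).
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            log.Fatal(err)
        }

        // Derive the authorized_keys line from the public half.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            log.Fatal(err)
        }
    }
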
	I1216 02:41:47.403286 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:41:47.435796 1836804 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 02:41:47.435807 1836804 kic_runner.go:114] Args: [docker exec --privileged functional-389759 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 02:41:47.486509 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:41:47.514796 1836804 machine.go:94] provisionDockerMachine start ...
	I1216 02:41:47.514885 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:47.545129 1836804 main.go:143] libmachine: Using SSH client type: native
	I1216 02:41:47.545484 1836804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:41:47.545491 1836804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:41:47.546330 1836804 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 02:41:50.682534 1836804 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:41:50.682550 1836804 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:41:50.682612 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:50.700243 1836804 main.go:143] libmachine: Using SSH client type: native
	I1216 02:41:50.700547 1836804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:41:50.700556 1836804 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:41:50.844487 1836804 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:41:50.844577 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:50.863332 1836804 main.go:143] libmachine: Using SSH client type: native
	I1216 02:41:50.863640 1836804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:41:50.863655 1836804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:41:50.995262 1836804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
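
Note the sequence above: the first dial at 02:41:47 fails with "ssh: handshake failed: EOF" because sshd inside the freshly started container is not up yet, and a later attempt at 02:41:50 succeeds. A minimal sketch of that dial-with-retry-then-run pattern, assuming golang.org/x/crypto/ssh, the id_rsa from the previous sketch, and the forwarded port 34354 from the log:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials the forwarded container port, retrying while sshd is
    // still coming up (the "handshake failed: EOF" case in the log),
    // then runs a single command and returns its combined output.
    func runSSH(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
        var client *ssh.Client
        var err error
        for i := 0; i < 10; i++ {
            client, err = ssh.Dial("tcp", addr, cfg)
            if err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        pemBytes, err := os.ReadFile("id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
            Timeout:         5 * time.Second,
        }
        out, err := runSSH("127.0.0.1:34354", cfg, "hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
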
	I1216 02:41:50.995280 1836804 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:41:50.995306 1836804 ubuntu.go:190] setting up certificates
	I1216 02:41:50.995314 1836804 provision.go:84] configureAuth start
	I1216 02:41:50.995381 1836804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:41:51.014044 1836804 provision.go:143] copyHostCerts
	I1216 02:41:51.014121 1836804 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:41:51.014129 1836804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:41:51.014234 1836804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:41:51.014334 1836804 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:41:51.014344 1836804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:41:51.014373 1836804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:41:51.014460 1836804 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:41:51.014463 1836804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:41:51.014489 1836804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:41:51.014549 1836804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
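
The server cert above is signed by the local CA and carries the node's SANs (127.0.0.1, 192.168.49.2, functional-389759, localhost, minikube). A minimal sketch of issuing such a cert with crypto/x509; for self-containment it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem as the log does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA generated in-process (key-gen errors ignored for
        // brevity in this sketch).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }

        // Server cert carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-389759"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:     []string{"functional-389759", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caTmpl, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile("server.pem", out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
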
	I1216 02:41:51.199556 1836804 provision.go:177] copyRemoteCerts
	I1216 02:41:51.199620 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:41:51.199658 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:51.218305 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:41:51.315016 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:41:51.332716 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:41:51.350176 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:41:51.367601 1836804 provision.go:87] duration metric: took 372.263595ms to configureAuth
	I1216 02:41:51.367619 1836804 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:41:51.367812 1836804 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:41:51.367819 1836804 machine.go:97] duration metric: took 3.853013625s to provisionDockerMachine
	I1216 02:41:51.367825 1836804 client.go:176] duration metric: took 9.203730986s to LocalClient.Create
	I1216 02:41:51.367850 1836804 start.go:167] duration metric: took 9.203786132s to libmachine.API.Create "functional-389759"
	I1216 02:41:51.367856 1836804 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:41:51.367865 1836804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:41:51.367913 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:41:51.367950 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:51.384750 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:41:51.483063 1836804 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:41:51.486278 1836804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:41:51.486296 1836804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:41:51.486306 1836804 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:41:51.486362 1836804 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:41:51.486447 1836804 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:41:51.486519 1836804 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:41:51.486564 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:41:51.494035 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:41:51.511101 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:41:51.528790 1836804 start.go:296] duration metric: took 160.921152ms for postStartSetup
	I1216 02:41:51.529182 1836804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:41:51.545938 1836804 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:41:51.546198 1836804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:41:51.546243 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:51.564062 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:41:51.660130 1836804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:41:51.665460 1836804 start.go:128] duration metric: took 9.505524518s to createHost
	I1216 02:41:51.665475 1836804 start.go:83] releasing machines lock for "functional-389759", held for 9.505654214s
	I1216 02:41:51.665567 1836804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:41:51.688175 1836804 out.go:179] * Found network options:
	I1216 02:41:51.691642 1836804 out.go:179]   - HTTP_PROXY=localhost:41589
	W1216 02:41:51.694158 1836804 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1216 02:41:51.696868 1836804 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
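
The warning above fires because 192.168.49.2 falls outside every NO_PROXY entry. A simplified sketch of that containment check, assuming NO_PROXY holds only comma-separated IPs and CIDRs (the real check also handles hostnames and domain suffixes):

    package main

    import (
        "fmt"
        "net"
        "os"
        "strings"
    )

    // inNoProxy reports whether ip is excluded from proxying by the
    // NO_PROXY environment variable. Simplified: exact IPs and CIDRs only.
    func inNoProxy(ip net.IP) bool {
        for _, entry := range strings.Split(os.Getenv("NO_PROXY"), ",") {
            entry = strings.TrimSpace(entry)
            if entry == "" {
                continue
            }
            if _, cidr, err := net.ParseCIDR(entry); err == nil {
                if cidr.Contains(ip) {
                    return true
                }
                continue
            }
            if parsed := net.ParseIP(entry); parsed != nil && parsed.Equal(ip) {
                return true
            }
        }
        return false
    }

    func main() {
        ip := net.ParseIP("192.168.49.2")
        if !inNoProxy(ip) {
            fmt.Printf("! NO_PROXY does not include the minikube IP (%s)\n", ip)
        }
    }
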
	I1216 02:41:51.699712 1836804 ssh_runner.go:195] Run: cat /version.json
	I1216 02:41:51.699757 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:51.699789 1836804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:41:51.699840 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:41:51.719512 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:41:51.725781 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:41:51.810634 1836804 ssh_runner.go:195] Run: systemctl --version
	I1216 02:41:51.903750 1836804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:41:51.908055 1836804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:41:51.908116 1836804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:41:51.934468 1836804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 02:41:51.934483 1836804 start.go:496] detecting cgroup driver to use...
	I1216 02:41:51.934528 1836804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:41:51.934584 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:41:51.950645 1836804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:41:51.964217 1836804 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:41:51.964281 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:41:51.981749 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:41:52.012353 1836804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:41:52.134449 1836804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:41:52.251164 1836804 docker.go:234] disabling docker service ...
	I1216 02:41:52.251219 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:41:52.274340 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:41:52.287362 1836804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:41:52.417205 1836804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:41:52.528413 1836804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:41:52.541544 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:41:52.559036 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:41:52.569170 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:41:52.578061 1836804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:41:52.578120 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:41:52.587182 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:41:52.596380 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:41:52.605092 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:41:52.614156 1836804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:41:52.622599 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:41:52.631794 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:41:52.641242 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:41:52.650022 1836804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:41:52.657994 1836804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:41:52.665666 1836804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:41:52.776744 1836804 ssh_runner.go:195] Run: sudo systemctl restart containerd
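
The run of sed invocations above rewrites /etc/containerd/config.toml in place (sandbox image, SystemdCgroup=false to match the detected cgroupfs driver, runc v2 runtime type, CNI conf_dir) before restarting containerd. A minimal sketch of one of those edits done with Go's regexp package instead of sed:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // setSystemdCgroup flips the SystemdCgroup toggle in containerd's
    // config.toml while preserving indentation -- the Go equivalent of
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func setSystemdCgroup(path string, enabled bool) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        val := "false"
        if enabled {
            val = "true"
        }
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
            log.Fatal(err)
        }
    }
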
	I1216 02:41:52.911490 1836804 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:41:52.911568 1836804 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:41:52.915572 1836804 start.go:564] Will wait 60s for crictl version
	I1216 02:41:52.915629 1836804 ssh_runner.go:195] Run: which crictl
	I1216 02:41:52.919425 1836804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:41:52.943918 1836804 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:41:52.943990 1836804 ssh_runner.go:195] Run: containerd --version
	I1216 02:41:52.965713 1836804 ssh_runner.go:195] Run: containerd --version
	I1216 02:41:52.992536 1836804 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:41:52.995355 1836804 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:41:53.013451 1836804 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:41:53.017661 1836804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
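
The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the network gateway. A minimal sketch of the same rewrite in Go; the path and names are taken from the log, and a hardened version would write via a temp file the way the "> /tmp/h.$$; sudo cp" pipeline does:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so that exactly one line maps name to ip,
    // mirroring the grep -v / echo / cp pipeline in the log line above.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale entry, like grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
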
	I1216 02:41:53.028221 1836804 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:41:53.028325 1836804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:41:53.028400 1836804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:41:53.059210 1836804 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:41:53.059222 1836804 containerd.go:534] Images already preloaded, skipping extraction
	I1216 02:41:53.059290 1836804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:41:53.084175 1836804 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:41:53.084188 1836804 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:41:53.084194 1836804 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:41:53.084284 1836804 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:41:53.084349 1836804 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:41:53.111406 1836804 cni.go:84] Creating CNI manager for ""
	I1216 02:41:53.111416 1836804 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:41:53.111437 1836804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:41:53.111465 1836804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:41:53.111571 1836804 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
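
The generated kubeadm.yaml above is four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch that walks such a multi-document file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    // typeMeta captures just the two fields shared by every document in
    // the generated kubeadm.yaml; all other fields are ignored.
    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var tm typeMeta
            if err := dec.Decode(&tm); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more documents
                }
                log.Fatal(err)
            }
            // e.g. "kubeadm.k8s.io/v1beta4 / InitConfiguration"
            fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
        }
    }
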
	
	I1216 02:41:53.111648 1836804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:41:53.119796 1836804 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:41:53.119859 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:41:53.127945 1836804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:41:53.141403 1836804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:41:53.154427 1836804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 02:41:53.167712 1836804 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:41:53.171523 1836804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:41:53.181750 1836804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:41:53.292618 1836804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:41:53.309917 1836804 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:41:53.309928 1836804 certs.go:195] generating shared ca certs ...
	I1216 02:41:53.309942 1836804 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:41:53.310112 1836804 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:41:53.310150 1836804 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:41:53.310156 1836804 certs.go:257] generating profile certs ...
	I1216 02:41:53.310215 1836804 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:41:53.310224 1836804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt with IP's: []
	I1216 02:41:53.385071 1836804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt ...
	I1216 02:41:53.385087 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: {Name:mkf5b2afa4ff09d31576d440e6ade56ed6d1cf8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:41:53.385286 1836804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key ...
	I1216 02:41:53.385293 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key: {Name:mk4ae7b1c21386e7b6110d43ab0dc54099526423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:41:53.385368 1836804 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:41:53.385379 1836804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 02:41:53.459433 1836804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84 ...
	I1216 02:41:53.459461 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84: {Name:mk7156a057fd9faaf2272c7a798fc39457ca3ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:41:53.460009 1836804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84 ...
	I1216 02:41:53.460019 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84: {Name:mkddc74ebf3e5006e4368dcb8c617f202b5feb32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:41:53.460100 1836804 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt
	I1216 02:41:53.460189 1836804 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key
	I1216 02:41:53.460243 1836804 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:41:53.460256 1836804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt with IP's: []
	I1216 02:41:53.922797 1836804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt ...
	I1216 02:41:53.922814 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt: {Name:mkeba81ebdc3f059e5b4e04c8bed296e88d0b249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:41:53.923020 1836804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key ...
	I1216 02:41:53.923030 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key: {Name:mk3fab92e43e0e11ba066d8d273ef0f6e251a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:41:53.923245 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:41:53.923292 1836804 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:41:53.923301 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:41:53.923327 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:41:53.923351 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:41:53.923374 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:41:53.923417 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:41:53.924017 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:41:53.942778 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:41:53.962164 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:41:53.979895 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:41:53.997572 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:41:54.021468 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:41:54.040597 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:41:54.059184 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:41:54.077415 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:41:54.095933 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:41:54.115132 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:41:54.133518 1836804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:41:54.146928 1836804 ssh_runner.go:195] Run: openssl version
	I1216 02:41:54.153599 1836804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:41:54.161652 1836804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:41:54.169616 1836804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:41:54.173413 1836804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:41:54.173472 1836804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:41:54.214943 1836804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:41:54.222600 1836804 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 02:41:54.230149 1836804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:41:54.237714 1836804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:41:54.245518 1836804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:41:54.249850 1836804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:41:54.249908 1836804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:41:54.291579 1836804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:41:54.298986 1836804 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 02:41:54.306619 1836804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:41:54.314174 1836804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:41:54.321951 1836804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:41:54.325995 1836804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:41:54.326055 1836804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:41:54.366934 1836804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:41:54.374558 1836804 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
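
The repeated pattern above (openssl x509 -hash -noout, then ln -fs) installs each CA under its OpenSSL subject-hash name, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem, which is how OpenSSL looks up trust anchors in a hashed cert directory. A minimal sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash reproduces the log's pattern: compute the OpenSSL
    // subject hash of certPath, then symlink /etc/ssl/certs/<hash>.0 to it.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("openssl hash: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // the -f in ln -fs: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
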
	I1216 02:41:54.382069 1836804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:41:54.385804 1836804 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 02:41:54.385850 1836804 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:41:54.385921 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:41:54.385992 1836804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:41:54.413389 1836804 cri.go:89] found id: ""
	I1216 02:41:54.413457 1836804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:41:54.421422 1836804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:41:54.429246 1836804 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 02:41:54.429307 1836804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:41:54.437247 1836804 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 02:41:54.437267 1836804 kubeadm.go:158] found existing configuration files:
	
	I1216 02:41:54.437319 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 02:41:54.445101 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 02:41:54.445170 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 02:41:54.452507 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 02:41:54.460108 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 02:41:54.460176 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:41:54.467661 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 02:41:54.475279 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 02:41:54.475348 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:41:54.483185 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 02:41:54.490947 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 02:41:54.491005 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
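Editor's note: the grep/rm pairs above are minikube's stale-kubeconfig cleanup. Each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:8441 and is removed otherwise, so the kubeadm init that follows regenerates it. A sketch of that invariant, using the same file names and endpoint as the log; the removeIfStale helper is illustrative, not the ssh_runner implementation.

package main

import (
	"bytes"
	"fmt"
	"os"
)

// removeIfStale deletes conf unless it already references endpoint,
// mirroring the `grep || rm -f` pairs in the log above. A missing or
// unreadable file falls through to the removal, like rm -f does.
func removeIfStale(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err == nil && bytes.Contains(data, []byte(endpoint)) {
		return nil // already points at the right endpoint, keep it
	}
	if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(conf, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}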
	I1216 02:41:54.498694 1836804 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 02:41:54.546753 1836804 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 02:41:54.547107 1836804 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 02:41:54.626707 1836804 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 02:41:54.626771 1836804 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 02:41:54.626806 1836804 kubeadm.go:319] OS: Linux
	I1216 02:41:54.626850 1836804 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 02:41:54.626897 1836804 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 02:41:54.626943 1836804 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 02:41:54.626990 1836804 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 02:41:54.627037 1836804 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 02:41:54.627111 1836804 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 02:41:54.627157 1836804 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 02:41:54.627204 1836804 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 02:41:54.627249 1836804 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 02:41:54.695797 1836804 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 02:41:54.695913 1836804 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 02:41:54.696023 1836804 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 02:41:54.703436 1836804 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 02:41:54.709920 1836804 out.go:252]   - Generating certificates and keys ...
	I1216 02:41:54.710028 1836804 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 02:41:54.710105 1836804 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 02:41:55.046737 1836804 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 02:41:55.606371 1836804 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 02:41:55.836961 1836804 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 02:41:55.940463 1836804 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 02:41:56.102129 1836804 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 02:41:56.102472 1836804 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 02:41:56.457447 1836804 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 02:41:56.457942 1836804 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 02:41:56.680049 1836804 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 02:41:56.879750 1836804 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 02:41:57.007794 1836804 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 02:41:57.007866 1836804 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 02:41:57.619803 1836804 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 02:41:57.852516 1836804 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 02:41:58.237294 1836804 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 02:41:58.493291 1836804 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 02:41:59.042206 1836804 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 02:41:59.042963 1836804 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 02:41:59.046273 1836804 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 02:41:59.050046 1836804 out.go:252]   - Booting up control plane ...
	I1216 02:41:59.050147 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 02:41:59.050228 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 02:41:59.050909 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 02:41:59.068105 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 02:41:59.068401 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 02:41:59.075715 1836804 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 02:41:59.075997 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 02:41:59.076196 1836804 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 02:41:59.225479 1836804 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 02:41:59.225591 1836804 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 02:45:59.226442 1836804 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001007216s
	I1216 02:45:59.226462 1836804 kubeadm.go:319] 
	I1216 02:45:59.226518 1836804 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 02:45:59.226555 1836804 kubeadm.go:319] 	- The kubelet is not running
	I1216 02:45:59.226660 1836804 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 02:45:59.226664 1836804 kubeadm.go:319] 
	I1216 02:45:59.226767 1836804 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 02:45:59.226798 1836804 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 02:45:59.226853 1836804 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 02:45:59.226857 1836804 kubeadm.go:319] 
	I1216 02:45:59.232079 1836804 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 02:45:59.232553 1836804 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 02:45:59.232660 1836804 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 02:45:59.232938 1836804 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 02:45:59.232942 1836804 kubeadm.go:319] 
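Editor's note: this first init attempt dies in kubeadm's kubelet-check phase, which polls the kubelet's local healthz endpoint and gives up once its budget expires. A sketch of that wait loop follows; the URL and the 4m0s deadline are taken from the log, while the one-second polling cadence and the function name are assumptions, not kubeadm's exact code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint, as kubeadm's
// kubelet-check does, until it answers 200 OK or the deadline passes.
// In this log the port never opens, so the full 4m0s budget expires.
func waitKubeletHealthy(deadline time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", deadline)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}

The probe itself is trivial; the "connection refused" in the error above means the kubelet never stayed up long enough to bind 127.0.0.1:10248, which the kubelet journal at the end of this report confirms.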
	W1216 02:45:59.233127 1836804 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001007216s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 02:45:59.233221 1836804 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 02:45:59.233354 1836804 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 02:45:59.643275 1836804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
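Editor's note: having failed once, minikube wipes the partial state with `kubeadm reset --force` and retries the identical init exactly once, which is the second attempt that follows below. An illustrative sketch of that reset-and-retry pattern; the command names, config path, and CRI socket are from the log, while the run helper merely stands in for ssh_runner executing over SSH.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command (requires root for kubeadm) and folds its
// combined output into the returned error, roughly what ssh_runner's
// logging shows above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	initArgs := []string{"init", "--config", "/var/tmp/minikube/kubeadm.yaml"}
	if err := run("kubeadm", initArgs...); err != nil {
		fmt.Println("first init failed, resetting and retrying:", err)
		// mirror the log: discard the partial state, then try once more
		_ = run("kubeadm", "reset",
			"--cri-socket", "/run/containerd/containerd.sock", "--force")
		if err := run("kubeadm", initArgs...); err != nil {
			fmt.Println("second init failed, giving up:", err)
		}
	}
}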
	I1216 02:45:59.656370 1836804 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 02:45:59.656424 1836804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:45:59.664108 1836804 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 02:45:59.664116 1836804 kubeadm.go:158] found existing configuration files:
	
	I1216 02:45:59.664176 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 02:45:59.672324 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 02:45:59.672388 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 02:45:59.680013 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 02:45:59.687704 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 02:45:59.687759 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:45:59.695199 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 02:45:59.702823 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 02:45:59.702886 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:45:59.710304 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 02:45:59.718282 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 02:45:59.718340 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:45:59.726059 1836804 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 02:45:59.767401 1836804 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 02:45:59.767596 1836804 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 02:45:59.835846 1836804 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 02:45:59.835908 1836804 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 02:45:59.835940 1836804 kubeadm.go:319] OS: Linux
	I1216 02:45:59.835982 1836804 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 02:45:59.836026 1836804 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 02:45:59.836069 1836804 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 02:45:59.836113 1836804 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 02:45:59.836157 1836804 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 02:45:59.836200 1836804 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 02:45:59.836242 1836804 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 02:45:59.836286 1836804 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 02:45:59.836328 1836804 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 02:45:59.903760 1836804 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 02:45:59.903859 1836804 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 02:45:59.903943 1836804 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 02:45:59.915476 1836804 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 02:45:59.920820 1836804 out.go:252]   - Generating certificates and keys ...
	I1216 02:45:59.920913 1836804 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 02:45:59.920981 1836804 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 02:45:59.921058 1836804 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 02:45:59.921121 1836804 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 02:45:59.921192 1836804 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 02:45:59.921242 1836804 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 02:45:59.921306 1836804 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 02:45:59.921369 1836804 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 02:45:59.921449 1836804 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 02:45:59.921522 1836804 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 02:45:59.921563 1836804 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 02:45:59.921616 1836804 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 02:46:00.156452 1836804 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 02:46:00.507591 1836804 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 02:46:01.084996 1836804 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 02:46:01.620158 1836804 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 02:46:01.821348 1836804 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 02:46:01.821979 1836804 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 02:46:01.825522 1836804 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 02:46:01.828839 1836804 out.go:252]   - Booting up control plane ...
	I1216 02:46:01.828936 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 02:46:01.829015 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 02:46:01.829439 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 02:46:01.849660 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 02:46:01.849940 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 02:46:01.857335 1836804 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 02:46:01.857593 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 02:46:01.857769 1836804 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 02:46:02.003938 1836804 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 02:46:02.004086 1836804 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 02:50:01.999350 1836804 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001643344s
	I1216 02:50:01.999373 1836804 kubeadm.go:319] 
	I1216 02:50:01.999432 1836804 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 02:50:01.999470 1836804 kubeadm.go:319] 	- The kubelet is not running
	I1216 02:50:02.001787 1836804 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 02:50:02.001815 1836804 kubeadm.go:319] 
	I1216 02:50:02.002008 1836804 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 02:50:02.002064 1836804 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 02:50:02.002446 1836804 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 02:50:02.002454 1836804 kubeadm.go:319] 
	I1216 02:50:02.011021 1836804 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 02:50:02.011535 1836804 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 02:50:02.011685 1836804 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 02:50:02.011946 1836804 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 02:50:02.011955 1836804 kubeadm.go:319] 
	I1216 02:50:02.012023 1836804 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 02:50:02.012085 1836804 kubeadm.go:403] duration metric: took 8m7.626242328s to StartCluster
	I1216 02:50:02.012122 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:50:02.012189 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:50:02.037959 1836804 cri.go:89] found id: ""
	I1216 02:50:02.037984 1836804 logs.go:282] 0 containers: []
	W1216 02:50:02.037991 1836804 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:50:02.037997 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:50:02.038059 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:50:02.070297 1836804 cri.go:89] found id: ""
	I1216 02:50:02.070312 1836804 logs.go:282] 0 containers: []
	W1216 02:50:02.070319 1836804 logs.go:284] No container was found matching "etcd"
	I1216 02:50:02.070324 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:50:02.070390 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:50:02.097654 1836804 cri.go:89] found id: ""
	I1216 02:50:02.097669 1836804 logs.go:282] 0 containers: []
	W1216 02:50:02.097676 1836804 logs.go:284] No container was found matching "coredns"
	I1216 02:50:02.097681 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:50:02.097740 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:50:02.126757 1836804 cri.go:89] found id: ""
	I1216 02:50:02.126772 1836804 logs.go:282] 0 containers: []
	W1216 02:50:02.126779 1836804 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:50:02.126784 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:50:02.126844 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:50:02.162393 1836804 cri.go:89] found id: ""
	I1216 02:50:02.162407 1836804 logs.go:282] 0 containers: []
	W1216 02:50:02.162415 1836804 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:50:02.162420 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:50:02.162483 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:50:02.190116 1836804 cri.go:89] found id: ""
	I1216 02:50:02.190132 1836804 logs.go:282] 0 containers: []
	W1216 02:50:02.190140 1836804 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:50:02.190146 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:50:02.190220 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:50:02.216152 1836804 cri.go:89] found id: ""
	I1216 02:50:02.216179 1836804 logs.go:282] 0 containers: []
	W1216 02:50:02.216188 1836804 logs.go:284] No container was found matching "kindnet"
	I1216 02:50:02.216196 1836804 logs.go:123] Gathering logs for kubelet ...
	I1216 02:50:02.216206 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:50:02.274351 1836804 logs.go:123] Gathering logs for dmesg ...
	I1216 02:50:02.274370 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:50:02.292154 1836804 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:50:02.292171 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:50:02.358339 1836804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:50:02.349506    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:02.350079    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:02.351854    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:02.352317    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:02.353783    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:50:02.358352 1836804 logs.go:123] Gathering logs for containerd ...
	I1216 02:50:02.358362 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:50:02.396543 1836804 logs.go:123] Gathering logs for container status ...
	I1216 02:50:02.396563 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
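Editor's note: with no control-plane containers to inspect, minikube falls back to host-level diagnostics; the same four commands produce the "==> kubelet <==", "==> dmesg <==", "==> containerd <==" and "==> container status <==" sections later in this report. A self-contained sketch that gathers them the same way (commands copied verbatim from the log; running them needs root, and the struct wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// Collect the same diagnostics minikube's logs.go gathers above, in the
// same order, printing each under the report's "==> name <==" header.
func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		// CombinedOutput keeps stderr, which is where most of these log
		out, _ := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", c.name, out)
	}
}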
	W1216 02:50:02.426020 1836804 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001643344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 02:50:02.426059 1836804 out.go:285] * 
	W1216 02:50:02.428541 1836804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:50:02.434930 1836804 out.go:203] 
	W1216 02:50:02.437781 1836804 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1216 02:50:02.437829 1836804 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 02:50:02.437850 1836804 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 02:50:02.441002 1836804 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851200569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851269277Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851378460Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851449491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851508763Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851568634Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851629023Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851691364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851758382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851845067Z" level=info msg="Connect containerd service"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.852205272Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.852902281Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.865826242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.865895336Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.865946986Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.866002484Z" level=info msg="Start recovering state"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908194480Z" level=info msg="Start event monitor"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908246515Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908257592Z" level=info msg="Start streaming server"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908266790Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908275134Z" level=info msg="runtime interface starting up..."
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908281148Z" level=info msg="starting plugins..."
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908294161Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908571980Z" level=info msg="containerd successfully booted in 0.086424s"
	Dec 16 02:41:52 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:50:03.407333    4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:03.408202    4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:03.409830    4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:03.410189    4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:50:03.411726    4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:50:03 up  8:32,  0 user,  load average: 0.04, 0.51, 1.12
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 02:50:00 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 16 02:50:01 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:01 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:01 functional-389759 kubelet[4722]: E1216 02:50:01.236735    4722 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 02:50:01 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:01 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:01 functional-389759 kubelet[4728]: E1216 02:50:01.987782    4728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:50:02 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 02:50:02 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:02 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:02 functional-389759 kubelet[4825]: E1216 02:50:02.761767    4825 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:50:02 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:50:02 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:50:03 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 02:50:03 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:03 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:50:03 functional-389759 kubelet[4924]: E1216 02:50:03.492495    4924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:50:03 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:50:03 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
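Note on the failure mode: the "==> kubelet <==" section above shows the proximate cause. kubelet v1.35.0-beta.0 fails its configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (322 restarts by 02:50:03), the API server on localhost:8441 never comes up, and the "describe nodes" calls above are refused. A minimal check of the host cgroup mode, assuming shell access to the node (for example via `minikube ssh`):

	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/

With the docker driver the node container inherits the host's cgroup layout, so this Ubuntu 20.04 builder (kernel 5.15.0-1084-aws, cgroup v1 by default) would need to boot with the usual `systemd.unified_cgroup_hierarchy=1` kernel parameter, or the job would need a cgroup v2 host, before this kubelet version can start.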
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 6 (334.379902ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 02:50:03.878381 1842531 status.go:458] kubeconfig endpoint: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.07s)
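A secondary symptom in the status check above: the "functional-389759" entry is missing from the shared kubeconfig, which is what produces the stale-context warning. The report's own suggested fix, sketched here for completeness (it repairs the kubectl context, though kubectl would still fail while the API server is down):

	out/minikube-linux-arm64 -p functional-389759 update-context
	kubectl config current-context   # should print functional-389759 afterwards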

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 02:50:03.894891 1798370 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389759 --alsologtostderr -v=8
E1216 02:50:51.130725 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:51:18.843314 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:53:51.133471 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:55:14.212075 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:55:51.131296 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389759 --alsologtostderr -v=8: exit status 80 (6m5.071017613s)
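The cert_rotation errors interleaved above are most likely unrelated noise: the tls-transport-cache in client-go keeps trying to reload client certificates for earlier profiles (functional-853651, addons-870019) whose files were removed when those profiles were deleted. One way to look for the stale references, assuming they live in the shared kubeconfig:

	grep -n "functional-853651\|addons-870019" /home/jenkins/minikube-integration/22158-1796512/kubeconfig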

-- stdout --
	* [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1216 02:50:03.940449 1842604 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:50:03.940640 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.940666 1842604 out.go:374] Setting ErrFile to fd 2...
	I1216 02:50:03.940685 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.941001 1842604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:50:03.941424 1842604 out.go:368] Setting JSON to false
	I1216 02:50:03.942302 1842604 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30748,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:50:03.942395 1842604 start.go:143] virtualization:  
	I1216 02:50:03.948050 1842604 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:50:03.951289 1842604 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:50:03.951381 1842604 notify.go:221] Checking for updates...
	I1216 02:50:03.954734 1842604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:50:03.957600 1842604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:03.960611 1842604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:50:03.963508 1842604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:50:03.966329 1842604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:50:03.969672 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:03.969806 1842604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:50:04.007031 1842604 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:50:04.007241 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.073702 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.062313817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.073812 1842604 docker.go:319] overlay module found
	I1216 02:50:04.077006 1842604 out.go:179] * Using the docker driver based on existing profile
	I1216 02:50:04.079902 1842604 start.go:309] selected driver: docker
	I1216 02:50:04.079932 1842604 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.080054 1842604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:50:04.080179 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.136011 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.126842192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.136427 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:04.136482 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:04.136533 1842604 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.139723 1842604 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:50:04.142545 1842604 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:50:04.145565 1842604 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:50:04.148399 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:04.148453 1842604 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:50:04.148469 1842604 cache.go:65] Caching tarball of preloaded images
	I1216 02:50:04.148474 1842604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:50:04.148567 1842604 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:50:04.148577 1842604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:50:04.148682 1842604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:50:04.168498 1842604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:50:04.168522 1842604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:50:04.168544 1842604 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:50:04.168575 1842604 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:50:04.168643 1842604 start.go:364] duration metric: took 46.539µs to acquireMachinesLock for "functional-389759"
	I1216 02:50:04.168667 1842604 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:50:04.168673 1842604 fix.go:54] fixHost starting: 
	I1216 02:50:04.168962 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:04.192862 1842604 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:50:04.192891 1842604 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:50:04.196202 1842604 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:50:04.196246 1842604 machine.go:94] provisionDockerMachine start ...
	I1216 02:50:04.196329 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.213973 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.214316 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.214325 1842604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:50:04.350600 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.350628 1842604 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:50:04.350691 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.368974 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.369299 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.369316 1842604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:50:04.513062 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.513215 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.531552 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.531870 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.531893 1842604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:50:04.663498 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:50:04.663573 1842604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:50:04.663612 1842604 ubuntu.go:190] setting up certificates
	I1216 02:50:04.663658 1842604 provision.go:84] configureAuth start
	I1216 02:50:04.663756 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:04.681830 1842604 provision.go:143] copyHostCerts
	I1216 02:50:04.681871 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681914 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:50:04.681921 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681996 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:50:04.682080 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682098 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:50:04.682107 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682134 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:50:04.682171 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682188 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:50:04.682192 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682218 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:50:04.682263 1842604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:50:04.918732 1842604 provision.go:177] copyRemoteCerts
	I1216 02:50:04.918803 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:50:04.918909 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.945401 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.043237 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 02:50:05.043301 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:50:05.061641 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 02:50:05.061702 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:50:05.079841 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 02:50:05.079956 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:50:05.097722 1842604 provision.go:87] duration metric: took 434.019439ms to configureAuth
	I1216 02:50:05.097754 1842604 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:50:05.097953 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:05.097967 1842604 machine.go:97] duration metric: took 901.714132ms to provisionDockerMachine
	I1216 02:50:05.097975 1842604 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:50:05.097987 1842604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:50:05.098051 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:50:05.098102 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.115383 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.211319 1842604 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:50:05.214768 1842604 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 02:50:05.214793 1842604 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 02:50:05.214797 1842604 command_runner.go:130] > VERSION_ID="12"
	I1216 02:50:05.214802 1842604 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 02:50:05.214807 1842604 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 02:50:05.214810 1842604 command_runner.go:130] > ID=debian
	I1216 02:50:05.214815 1842604 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 02:50:05.214820 1842604 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 02:50:05.214826 1842604 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 02:50:05.214871 1842604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:50:05.214894 1842604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:50:05.214911 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:50:05.214973 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:50:05.215088 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:50:05.215101 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /etc/ssl/certs/17983702.pem
	I1216 02:50:05.215203 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:50:05.215211 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> /etc/test/nested/copy/1798370/hosts
	I1216 02:50:05.215287 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:50:05.223273 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:05.241790 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:50:05.259718 1842604 start.go:296] duration metric: took 161.727689ms for postStartSetup
	I1216 02:50:05.259801 1842604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:50:05.259846 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.277760 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.371870 1842604 command_runner.go:130] > 18%
	I1216 02:50:05.372496 1842604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:50:05.377201 1842604 command_runner.go:130] > 161G
	I1216 02:50:05.377708 1842604 fix.go:56] duration metric: took 1.209030723s for fixHost
	I1216 02:50:05.377728 1842604 start.go:83] releasing machines lock for "functional-389759", held for 1.209073027s
	I1216 02:50:05.377811 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:05.395427 1842604 ssh_runner.go:195] Run: cat /version.json
	I1216 02:50:05.395497 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.395795 1842604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:50:05.395856 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.414621 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.417076 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.510754 1842604 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 02:50:05.510902 1842604 ssh_runner.go:195] Run: systemctl --version
	I1216 02:50:05.609923 1842604 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 02:50:05.612841 1842604 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 02:50:05.612896 1842604 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 02:50:05.613034 1842604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 02:50:05.617736 1842604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 02:50:05.617774 1842604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:50:05.617838 1842604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:50:05.626000 1842604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:50:05.626028 1842604 start.go:496] detecting cgroup driver to use...
	I1216 02:50:05.626059 1842604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:50:05.626109 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:50:05.644077 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:50:05.659636 1842604 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:50:05.659709 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:50:05.676805 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:50:05.692573 1842604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:50:05.816755 1842604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:50:05.944883 1842604 docker.go:234] disabling docker service ...
	I1216 02:50:05.944952 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:50:05.960111 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:50:05.973273 1842604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:50:06.102700 1842604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:50:06.226099 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:50:06.239914 1842604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:50:06.254235 1842604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 02:50:06.255720 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:50:06.265881 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:50:06.274988 1842604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:50:06.275099 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:50:06.284319 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.293767 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:50:06.302914 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.312051 1842604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:50:06.320364 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:50:06.329464 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:50:06.338574 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:50:06.347623 1842604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:50:06.354520 1842604 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 02:50:06.355609 1842604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:50:06.363468 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:06.501216 1842604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 02:50:06.641570 1842604 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:50:06.641646 1842604 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:50:06.645599 1842604 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1216 02:50:06.645623 1842604 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 02:50:06.645629 1842604 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1216 02:50:06.645636 1842604 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:06.645642 1842604 command_runner.go:130] > Access: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645647 1842604 command_runner.go:130] > Modify: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645652 1842604 command_runner.go:130] > Change: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645656 1842604 command_runner.go:130] >  Birth: -
	I1216 02:50:06.645685 1842604 start.go:564] Will wait 60s for crictl version
	I1216 02:50:06.645740 1842604 ssh_runner.go:195] Run: which crictl
	I1216 02:50:06.649139 1842604 command_runner.go:130] > /usr/local/bin/crictl
	I1216 02:50:06.649430 1842604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:50:06.676623 1842604 command_runner.go:130] > Version:  0.1.0
	I1216 02:50:06.676645 1842604 command_runner.go:130] > RuntimeName:  containerd
	I1216 02:50:06.676661 1842604 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1216 02:50:06.676671 1842604 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 02:50:06.676683 1842604 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:50:06.676740 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.701508 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.703452 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.721412 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.729453 1842604 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:50:06.732626 1842604 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:50:06.754519 1842604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:50:06.758684 1842604 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 02:50:06.758798 1842604 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:50:06.758921 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:06.758993 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.783030 1842604 command_runner.go:130] > {
	I1216 02:50:06.783088 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.783093 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783103 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.783109 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783114 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.783117 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783121 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783130 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.783133 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783138 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.783142 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783146 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783149 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783152 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783160 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.783163 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783169 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.783172 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783176 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783185 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.783188 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783192 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.783196 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783200 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783204 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783207 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783214 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.783224 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783229 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.783232 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783243 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783251 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.783254 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783258 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.783262 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.783266 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783269 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783272 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783278 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.783282 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783287 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.783290 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783294 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783305 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.783308 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783312 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.783317 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783321 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783324 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783328 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783332 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783337 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783340 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783347 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.783351 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.783359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783363 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783370 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.783374 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783381 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.783384 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783392 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783395 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783399 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783403 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783406 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783409 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783415 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.783419 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783424 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.783427 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783431 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783439 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.783442 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783446 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.783450 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783454 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783457 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783461 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783464 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783467 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783470 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783476 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.783480 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783485 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.783488 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783492 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783499 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.783503 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783506 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.783510 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783514 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783520 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783524 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783530 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.783534 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783540 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.783543 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783546 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783554 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.783557 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.783568 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783572 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783575 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783579 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783582 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783585 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783588 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783595 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.783599 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783604 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.783608 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783611 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783619 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.783622 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783625 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.783629 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783633 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.783636 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783639 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783643 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.783646 1842604 command_runner.go:130] >     }
	I1216 02:50:06.783648 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.783651 1842604 command_runner.go:130] > }
	I1216 02:50:06.785559 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.785577 1842604 containerd.go:534] Images already preloaded, skipping extraction
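
The preload check above parses the output of "sudo crictl images --output json". A minimal Go sketch (not minikube's actual code; the struct names are assumptions matched to the JSON shape printed above) that runs the same command and looks up one expected tag:

    // Sketch: decode the JSON shape printed above by
    // `crictl images --output json` and look up one expected tag.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImage struct {
    	ID          string   `json:"id"`
    	RepoTags    []string `json:"repoTags"`
    	RepoDigests []string `json:"repoDigests"`
    	Size        string   `json:"size"` // crictl emits size as a string
    	Pinned      bool     `json:"pinned"`
    }

    type crictlImages struct {
    	Images []crictlImage `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list crictlImages
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("preloaded:", tag, "id:", img.ID)
    			}
    		}
    	}
    }
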
	I1216 02:50:06.785637 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.809062 1842604 command_runner.go:130] > {
	I1216 02:50:06.809080 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.809085 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809094 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.809099 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809105 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.809108 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809112 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809121 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.809125 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809129 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.809133 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809137 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809140 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809143 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809153 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.809157 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809162 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.809166 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809170 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809178 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.809181 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809186 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.809189 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809193 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809196 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809199 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809207 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.809211 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809216 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.809219 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809226 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809235 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.809241 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809246 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.809250 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.809254 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809257 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809260 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809267 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.809270 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809276 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.809279 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809283 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809291 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.809294 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809298 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.809303 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809307 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809311 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809315 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809318 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809322 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809325 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809332 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.809335 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809341 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.809344 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809348 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.809359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809364 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.809367 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809379 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809382 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809386 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809393 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809396 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809399 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809406 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.809410 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809416 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.809419 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809423 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809432 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.809435 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809439 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.809443 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809447 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809450 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809453 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809461 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809464 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809467 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809475 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.809478 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809483 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.809486 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809490 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809498 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.809501 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809505 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.809509 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809513 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809516 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809519 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809526 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.809530 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809535 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.809541 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809545 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809553 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.809556 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.809564 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809568 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809571 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809575 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809579 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809582 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809585 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809591 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.809595 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809599 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.809602 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809606 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809614 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.809616 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809620 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.809624 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809627 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.809632 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809635 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809639 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.809642 1842604 command_runner.go:130] >     }
	I1216 02:50:06.809645 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.809648 1842604 command_runner.go:130] > }
	I1216 02:50:06.811271 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.811300 1842604 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:50:06.811308 1842604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:50:06.811452 1842604 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
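
The [Service] drop-in above follows the standard systemd override pattern: the empty ExecStart= clears any packaged command line before the full kubelet invocation is set. A small illustrative Go sketch (the template fields are assumptions, not minikube's actual template) rendering such a drop-in with text/template:

    // Sketch: render a kubelet systemd drop-in like the one above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const dropIn = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Values taken from the log lines above.
    	_ = t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.35.0-beta.0",
    		"NodeName":          "functional-389759",
    		"NodeIP":            "192.168.49.2",
    	})
    }
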
	I1216 02:50:06.811544 1842604 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:50:06.842167 1842604 command_runner.go:130] > {
	I1216 02:50:06.842188 1842604 command_runner.go:130] >   "cniconfig": {
	I1216 02:50:06.842194 1842604 command_runner.go:130] >     "Networks": [
	I1216 02:50:06.842198 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842203 1842604 command_runner.go:130] >         "Config": {
	I1216 02:50:06.842208 1842604 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1216 02:50:06.842213 1842604 command_runner.go:130] >           "Name": "cni-loopback",
	I1216 02:50:06.842217 1842604 command_runner.go:130] >           "Plugins": [
	I1216 02:50:06.842220 1842604 command_runner.go:130] >             {
	I1216 02:50:06.842224 1842604 command_runner.go:130] >               "Network": {
	I1216 02:50:06.842229 1842604 command_runner.go:130] >                 "ipam": {},
	I1216 02:50:06.842234 1842604 command_runner.go:130] >                 "type": "loopback"
	I1216 02:50:06.842238 1842604 command_runner.go:130] >               },
	I1216 02:50:06.842243 1842604 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1216 02:50:06.842246 1842604 command_runner.go:130] >             }
	I1216 02:50:06.842249 1842604 command_runner.go:130] >           ],
	I1216 02:50:06.842259 1842604 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1216 02:50:06.842263 1842604 command_runner.go:130] >         },
	I1216 02:50:06.842268 1842604 command_runner.go:130] >         "IFName": "lo"
	I1216 02:50:06.842271 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842275 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842279 1842604 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1216 02:50:06.842283 1842604 command_runner.go:130] >     "PluginDirs": [
	I1216 02:50:06.842287 1842604 command_runner.go:130] >       "/opt/cni/bin"
	I1216 02:50:06.842291 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842298 1842604 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1216 02:50:06.842301 1842604 command_runner.go:130] >     "Prefix": "eth"
	I1216 02:50:06.842304 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842308 1842604 command_runner.go:130] >   "config": {
	I1216 02:50:06.842312 1842604 command_runner.go:130] >     "cdiSpecDirs": [
	I1216 02:50:06.842315 1842604 command_runner.go:130] >       "/etc/cdi",
	I1216 02:50:06.842320 1842604 command_runner.go:130] >       "/var/run/cdi"
	I1216 02:50:06.842328 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842331 1842604 command_runner.go:130] >     "cni": {
	I1216 02:50:06.842335 1842604 command_runner.go:130] >       "binDir": "",
	I1216 02:50:06.842338 1842604 command_runner.go:130] >       "binDirs": [
	I1216 02:50:06.842342 1842604 command_runner.go:130] >         "/opt/cni/bin"
	I1216 02:50:06.842345 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.842349 1842604 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1216 02:50:06.842352 1842604 command_runner.go:130] >       "confTemplate": "",
	I1216 02:50:06.842356 1842604 command_runner.go:130] >       "ipPref": "",
	I1216 02:50:06.842359 1842604 command_runner.go:130] >       "maxConfNum": 1,
	I1216 02:50:06.842364 1842604 command_runner.go:130] >       "setupSerially": false,
	I1216 02:50:06.842368 1842604 command_runner.go:130] >       "useInternalLoopback": false
	I1216 02:50:06.842371 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842378 1842604 command_runner.go:130] >     "containerd": {
	I1216 02:50:06.842382 1842604 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1216 02:50:06.842387 1842604 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1216 02:50:06.842392 1842604 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1216 02:50:06.842396 1842604 command_runner.go:130] >       "runtimes": {
	I1216 02:50:06.842399 1842604 command_runner.go:130] >         "runc": {
	I1216 02:50:06.842404 1842604 command_runner.go:130] >           "ContainerAnnotations": null,
	I1216 02:50:06.842415 1842604 command_runner.go:130] >           "PodAnnotations": null,
	I1216 02:50:06.842421 1842604 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1216 02:50:06.842425 1842604 command_runner.go:130] >           "cgroupWritable": false,
	I1216 02:50:06.842429 1842604 command_runner.go:130] >           "cniConfDir": "",
	I1216 02:50:06.842433 1842604 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1216 02:50:06.842436 1842604 command_runner.go:130] >           "io_type": "",
	I1216 02:50:06.842439 1842604 command_runner.go:130] >           "options": {
	I1216 02:50:06.842443 1842604 command_runner.go:130] >             "BinaryName": "",
	I1216 02:50:06.842448 1842604 command_runner.go:130] >             "CriuImagePath": "",
	I1216 02:50:06.842451 1842604 command_runner.go:130] >             "CriuWorkPath": "",
	I1216 02:50:06.842455 1842604 command_runner.go:130] >             "IoGid": 0,
	I1216 02:50:06.842458 1842604 command_runner.go:130] >             "IoUid": 0,
	I1216 02:50:06.842462 1842604 command_runner.go:130] >             "NoNewKeyring": false,
	I1216 02:50:06.842469 1842604 command_runner.go:130] >             "Root": "",
	I1216 02:50:06.842473 1842604 command_runner.go:130] >             "ShimCgroup": "",
	I1216 02:50:06.842480 1842604 command_runner.go:130] >             "SystemdCgroup": false
	I1216 02:50:06.842483 1842604 command_runner.go:130] >           },
	I1216 02:50:06.842488 1842604 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1216 02:50:06.842494 1842604 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1216 02:50:06.842499 1842604 command_runner.go:130] >           "runtimePath": "",
	I1216 02:50:06.842504 1842604 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1216 02:50:06.842508 1842604 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1216 02:50:06.842512 1842604 command_runner.go:130] >           "snapshotter": ""
	I1216 02:50:06.842515 1842604 command_runner.go:130] >         }
	I1216 02:50:06.842518 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842521 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842530 1842604 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1216 02:50:06.842535 1842604 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1216 02:50:06.842541 1842604 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1216 02:50:06.842546 1842604 command_runner.go:130] >     "disableApparmor": false,
	I1216 02:50:06.842550 1842604 command_runner.go:130] >     "disableHugetlbController": true,
	I1216 02:50:06.842554 1842604 command_runner.go:130] >     "disableProcMount": false,
	I1216 02:50:06.842558 1842604 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1216 02:50:06.842562 1842604 command_runner.go:130] >     "enableCDI": true,
	I1216 02:50:06.842565 1842604 command_runner.go:130] >     "enableSelinux": false,
	I1216 02:50:06.842569 1842604 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1216 02:50:06.842573 1842604 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1216 02:50:06.842578 1842604 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1216 02:50:06.842582 1842604 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1216 02:50:06.842586 1842604 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1216 02:50:06.842590 1842604 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1216 02:50:06.842595 1842604 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1216 02:50:06.842600 1842604 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842604 1842604 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1216 02:50:06.842610 1842604 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842614 1842604 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1216 02:50:06.842622 1842604 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1216 02:50:06.842625 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842628 1842604 command_runner.go:130] >   "features": {
	I1216 02:50:06.842632 1842604 command_runner.go:130] >     "supplemental_groups_policy": true
	I1216 02:50:06.842635 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842639 1842604 command_runner.go:130] >   "golang": "go1.24.9",
	I1216 02:50:06.842649 1842604 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842658 1842604 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842662 1842604 command_runner.go:130] >   "runtimeHandlers": [
	I1216 02:50:06.842665 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842668 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842672 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842676 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842679 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842682 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842685 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842688 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842693 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842697 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842700 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842703 1842604 command_runner.go:130] >       "name": "runc"
	I1216 02:50:06.842706 1842604 command_runner.go:130] >     }
	I1216 02:50:06.842709 1842604 command_runner.go:130] >   ],
	I1216 02:50:06.842713 1842604 command_runner.go:130] >   "status": {
	I1216 02:50:06.842716 1842604 command_runner.go:130] >     "conditions": [
	I1216 02:50:06.842719 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842723 1842604 command_runner.go:130] >         "message": "",
	I1216 02:50:06.842730 1842604 command_runner.go:130] >         "reason": "",
	I1216 02:50:06.842734 1842604 command_runner.go:130] >         "status": true,
	I1216 02:50:06.842739 1842604 command_runner.go:130] >         "type": "RuntimeReady"
	I1216 02:50:06.842742 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842745 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842756 1842604 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1216 02:50:06.842764 1842604 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1216 02:50:06.842775 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842779 1842604 command_runner.go:130] >         "type": "NetworkReady"
	I1216 02:50:06.842782 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842785 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842816 1842604 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1216 02:50:06.842831 1842604 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1216 02:50:06.842837 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842848 1842604 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1216 02:50:06.842852 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842855 1842604 command_runner.go:130] >     ]
	I1216 02:50:06.842857 1842604 command_runner.go:130] >   }
	I1216 02:50:06.842860 1842604 command_runner.go:130] > }
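
In the "crictl info" dump above, RuntimeReady is true but NetworkReady is false with reason NetworkPluginNotReady, because no CNI config exists yet in /etc/cni/net.d; the next lines therefore pick kindnet for the docker driver + containerd runtime. A hedged Go sketch (assumed struct shape mirroring the JSON above) that polls the runtime until NetworkReady flips to true:

    // Sketch: poll `crictl info` until the NetworkReady condition is true.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"time"
    )

    type condition struct {
    	Type    string `json:"type"`
    	Status  bool   `json:"status"`
    	Reason  string `json:"reason"`
    	Message string `json:"message"`
    }

    type criInfo struct {
    	Status struct {
    		Conditions []condition `json:"conditions"`
    	} `json:"status"`
    }

    func networkReady() (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "info").Output()
    	if err != nil {
    		return false, err
    	}
    	var info criInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		return false, err
    	}
    	for _, c := range info.Status.Conditions {
    		if c.Type == "NetworkReady" {
    			return c.Status, nil
    		}
    	}
    	return false, fmt.Errorf("NetworkReady condition not reported")
    }

    func main() {
    	for i := 0; i < 10; i++ {
    		ok, err := networkReady()
    		fmt.Println("NetworkReady:", ok, "err:", err)
    		if ok {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
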
	I1216 02:50:06.845895 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:06.845921 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:06.845936 1842604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:50:06.845966 1842604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:50:06.846165 1842604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
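
The kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new below. A minimal sketch, assuming gopkg.in/yaml.v3, that splits such a stream and prints each document's apiVersion and kind:

    // Sketch: enumerate the documents in a multi-document kubeadm config.
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }
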
	
	I1216 02:50:06.846270 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:50:06.854737 1842604 command_runner.go:130] > kubeadm
	I1216 02:50:06.854757 1842604 command_runner.go:130] > kubectl
	I1216 02:50:06.854762 1842604 command_runner.go:130] > kubelet
	I1216 02:50:06.854790 1842604 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:50:06.854884 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:50:06.863474 1842604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:50:06.877235 1842604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:50:06.893176 1842604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 02:50:06.907542 1842604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:50:06.911554 1842604 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
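
Before restarting the kubelet, the log verifies that /etc/hosts maps control-plane.minikube.internal to 192.168.49.2. A sketch of an idempotent ensure-entry helper (a simplification; minikube's actual implementation differs), appending the mapping only when it is missing:

    // Sketch: ensure "192.168.49.2  control-plane.minikube.internal"
    // exists in /etc/hosts, appending it only if the mapping is absent.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
    			f.Close()
    			return nil // already present
    		}
    	}
    	f.Close()
    	if err := sc.Err(); err != nil {
    		return err
    	}
    	out, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = fmt.Fprintf(out, "%s\t%s\n", ip, host)
    	return err
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
    		panic(err) // needs root to append
    	}
    }
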
	I1216 02:50:06.912008 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.032285 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:07.187841 1842604 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:50:07.187907 1842604 certs.go:195] generating shared ca certs ...
	I1216 02:50:07.187938 1842604 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.188113 1842604 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:50:07.188262 1842604 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:50:07.188293 1842604 certs.go:257] generating profile certs ...
	I1216 02:50:07.188479 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:50:07.188626 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:50:07.188704 1842604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:50:07.188746 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 02:50:07.188833 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 02:50:07.188865 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 02:50:07.188913 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 02:50:07.188955 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 02:50:07.188991 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 02:50:07.189039 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 02:50:07.189094 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 02:50:07.189217 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:50:07.189294 1842604 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:50:07.189332 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:50:07.189413 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:50:07.189488 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:50:07.189568 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:50:07.189665 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:07.189733 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.189792 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem -> /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.189829 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.192734 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:50:07.215749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:50:07.236395 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:50:07.256110 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:50:07.276540 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:50:07.296274 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:50:07.314749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:50:07.333206 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:50:07.351818 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:50:07.370275 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:50:07.390851 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:50:07.409219 1842604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:50:07.421911 1842604 ssh_runner.go:195] Run: openssl version
	I1216 02:50:07.427966 1842604 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 02:50:07.428408 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.436062 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:50:07.443738 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447498 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447742 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447801 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.490768 1842604 command_runner.go:130] > b5213941
	I1216 02:50:07.491273 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:50:07.498894 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.506703 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:50:07.514440 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518338 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518429 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518508 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.559640 1842604 command_runner.go:130] > 51391683
	I1216 02:50:07.560095 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:50:07.567522 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.574982 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:50:07.582626 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586721 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586817 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586878 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.628240 1842604 command_runner.go:130] > 3ec20f2e
	I1216 02:50:07.628688 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
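
Each CA above is made discoverable to OpenSSL-based clients by symlinking it into /etc/ssl/certs under its subject hash (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does:

    // Sketch: compute the OpenSSL subject hash of a CA cert and link it
    // into /etc/ssl/certs as <hash>.0, mirroring the log's ln -fs + test -L.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
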
	I1216 02:50:07.636367 1842604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640270 1842604 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640300 1842604 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 02:50:07.640307 1842604 command_runner.go:130] > Device: 259,1	Inode: 2346079     Links: 1
	I1216 02:50:07.640313 1842604 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:07.640320 1842604 command_runner.go:130] > Access: 2025-12-16 02:45:59.904024015 +0000
	I1216 02:50:07.640326 1842604 command_runner.go:130] > Modify: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640331 1842604 command_runner.go:130] > Change: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640338 1842604 command_runner.go:130] >  Birth: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640415 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:50:07.685787 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.686316 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:50:07.726862 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.727358 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:50:07.769278 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.769775 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:50:07.810792 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.811300 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:50:07.852245 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.852345 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 02:50:07.894213 1842604 command_runner.go:130] > Certificate will not expire
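
"openssl x509 -noout -checkend 86400" exits non-zero when a certificate expires within the next 86400 seconds (24h), which is how each control-plane cert above is validated before restart. The same check in pure Go with crypto/x509 (a sketch; the path is one taken from the log):

    // Sketch: native equivalent of `openssl x509 -noout -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True when NotAfter falls inside the next d.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
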
	I1216 02:50:07.894706 1842604 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:07.894832 1842604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:50:07.894910 1842604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:50:07.922900 1842604 cri.go:89] found id: ""
	I1216 02:50:07.922983 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:50:07.930226 1842604 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 02:50:07.930256 1842604 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 02:50:07.930263 1842604 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 02:50:07.931439 1842604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:50:07.931499 1842604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:50:07.931562 1842604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:50:07.943740 1842604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:50:07.944155 1842604 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.944257 1842604 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389759" cluster setting kubeconfig missing "functional-389759" context setting]
	I1216 02:50:07.944564 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.945009 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.945157 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:07.945886 1842604 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 02:50:07.945970 1842604 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 02:50:07.945985 1842604 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 02:50:07.945994 1842604 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 02:50:07.946007 1842604 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 02:50:07.946011 1842604 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 02:50:07.946339 1842604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:50:07.958263 1842604 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 02:50:07.958306 1842604 kubeadm.go:602] duration metric: took 26.787333ms to restartPrimaryControlPlane
	I1216 02:50:07.958316 1842604 kubeadm.go:403] duration metric: took 63.631777ms to StartCluster
	I1216 02:50:07.958333 1842604 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.958427 1842604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.959238 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.959525 1842604 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 02:50:07.959950 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:07.960006 1842604 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 02:50:07.960112 1842604 addons.go:70] Setting storage-provisioner=true in profile "functional-389759"
	I1216 02:50:07.960129 1842604 addons.go:239] Setting addon storage-provisioner=true in "functional-389759"
	I1216 02:50:07.960152 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:07.960945 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.961166 1842604 addons.go:70] Setting default-storageclass=true in profile "functional-389759"
	I1216 02:50:07.961188 1842604 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389759"
	I1216 02:50:07.961453 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.966091 1842604 out.go:179] * Verifying Kubernetes components...
	I1216 02:50:07.968861 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.999405 1842604 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:50:08.003951 1842604 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.003988 1842604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:50:08.004070 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.016743 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:08.016935 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:08.017229 1842604 addons.go:239] Setting addon default-storageclass=true in "functional-389759"
	I1216 02:50:08.017278 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:08.017759 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:08.056545 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
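sshutil.go:53 dials the forwarded port resolved above (34354) as user docker with the machine's id_rsa. A hedged golang.org/x/crypto/ssh sketch of that connection — the key path is illustrative, and host key checking is disabled only because the target is a throwaway local test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/.minikube/machines/functional-389759/id_rsa") // illustrative path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34354", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected:", string(client.ServerVersion()))
    }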
	I1216 02:50:08.063547 1842604 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.063573 1842604 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:50:08.063643 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.096801 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.182820 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:08.204300 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.216429 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
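Both apply commands above run the node's own kubectl binary with an explicit KUBECONFIG in the environment. A local analogue using os/exec (paths taken from the log; kubectl assumed to be on PATH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        // Point kubectl at a specific kubeconfig, as the
        // sudo KUBECONFIG=... invocations above do.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }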
	I1216 02:50:08.938921 1842604 node_ready.go:35] waiting up to 6m0s for node "functional-389759" to be "Ready" ...
	I1216 02:50:08.939066 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:08.939127 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939127 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
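node_ready.go:35 drives the GET /api/v1/nodes/functional-389759 requests that fill the rest of this log: poll every 500ms for up to 6m until the node's Ready condition is True, tolerating transient failures while the apiserver restarts. A hedged client-go sketch of that loop (kubeconfig path illustrative; errors swallowed and retried, as minikube's retries below show):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms for up to 6m, matching the cadence in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "functional-389759", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient (e.g. connection refused): keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("wait result:", err)
    }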
	W1216 02:50:08.939303 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939364 1842604 retry.go:31] will retry after 371.599151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
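Each retry.go:31 entry above reschedules the failed apply after a growing, jittered delay — 371ms here, climbing past 13s later in the log. A minimal sketch of that pattern using apimachinery's wait.Backoff; the parameters only roughly match the observed delays:

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        backoff := wait.Backoff{
            Duration: 300 * time.Millisecond, // first delay, roughly as logged
            Factor:   1.8,                    // grow each retry
            Jitter:   0.5,                    // randomize, so delays are not monotonic
            Steps:    10,
        }
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            out, err := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
            if err != nil {
                fmt.Printf("apply failed, will retry: %v\n%s", err, out)
                return false, nil // not done yet: back off and retry
            }
            return true, nil
        })
        fmt.Println("final result:", err)
    }

The jitter explains why the logged delays are not strictly increasing (208ms follows 371ms) even though the base interval grows.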
	I1216 02:50:08.939463 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:08.939655 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939681 1842604 retry.go:31] will retry after 208.586178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.149421 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.213240 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.213284 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.213304 1842604 retry.go:31] will retry after 201.914515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.311585 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.373333 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.373376 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.373396 1842604 retry.go:31] will retry after 439.688248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.415509 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.439207 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.483422 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.483469 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.483489 1842604 retry.go:31] will retry after 841.778226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.814006 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.876109 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.880285 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.880327 1842604 retry.go:31] will retry after 574.892877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.939502 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.939583 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.939923 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.325447 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:10.394946 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.394995 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.395014 1842604 retry.go:31] will retry after 1.198470662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.439106 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.439176 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.439428 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.455825 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:10.523765 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.523815 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.523848 1842604 retry.go:31] will retry after 636.325982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.939367 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:10.939781 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
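node_ready.go:55 treats this connection-refused as transient and keeps polling rather than failing the wait outright. A simplified heuristic for classifying such errors as retryable — not minikube's actual check, just the idea:

    package main

    import (
        "errors"
        "fmt"
        "syscall"
    )

    // retryable reports whether an error is a transient connection failure
    // worth retrying, like the "connect: connection refused" entries above.
    func retryable(err error) bool {
        return errors.Is(err, syscall.ECONNREFUSED)
    }

    func main() {
        err := fmt.Errorf("dial tcp 192.168.49.2:8441: %w", syscall.ECONNREFUSED)
        fmt.Println(retryable(err)) // true
    }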
	I1216 02:50:11.161191 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:11.242833 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.242908 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.242943 1842604 retry.go:31] will retry after 1.140424726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.439654 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:11.594053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:11.649408 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.653163 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.653194 1842604 retry.go:31] will retry after 1.344955883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.939594 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.939687 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.940009 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.383614 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:12.439264 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.439344 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.440165 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:12.443835 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.443867 1842604 retry.go:31] will retry after 2.819298169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.939234 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.939324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.999127 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:13.066096 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:13.066142 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.066177 1842604 retry.go:31] will retry after 2.29209329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.439591 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.439676 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.440017 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:13.440078 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:13.939859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.939946 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.940333 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.439599 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:15.264053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:15.323662 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.327080 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.327109 1842604 retry.go:31] will retry after 3.65241611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.359324 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:15.421588 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.421635 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.421654 1842604 retry.go:31] will retry after 1.62104706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.439778 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.439879 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.440170 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:15.440216 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:15.940008 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.940084 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.940410 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.440078 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.440155 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.440450 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:17.043912 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:17.099707 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:17.103362 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.103395 1842604 retry.go:31] will retry after 4.481188348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.439835 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.439929 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.440261 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:17.440327 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:17.940004 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.940083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.940382 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.439649 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.939235 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.939311 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.939696 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.980018 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:19.042087 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:19.045748 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.045786 1842604 retry.go:31] will retry after 3.780614615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.439604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:19.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.939337 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.939666 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:19.939722 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:20.439426 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.439516 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.439851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:20.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.939502 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.439268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.585043 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:21.648279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:21.648322 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.648342 1842604 retry.go:31] will retry after 5.326379112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.939713 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.939790 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.940115 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:21.940177 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:22.439859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.439927 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.440196 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:22.826669 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:22.887724 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:22.891256 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.891291 1842604 retry.go:31] will retry after 7.007720529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.939466 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.939552 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.939870 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.439633 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.439715 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.440036 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.939677 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.939748 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.940008 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:24.439927 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.440005 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.440343 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:24.440400 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:24.939690 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.939766 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.940068 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.439712 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.439799 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.440085 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.939947 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.940024 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.940358 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.439107 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.439185 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.939570 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:26.939627 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:26.975786 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:27.047539 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:27.047579 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.047598 1842604 retry.go:31] will retry after 10.416340882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.439244 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.439321 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.439647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:27.939345 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.939450 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.939785 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.439175 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.439255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.439518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.939274 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.939371 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.939720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:28.939777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:29.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.439629 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.899356 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:29.940020 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.940094 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.940346 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.975996 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:29.976895 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:29.976922 1842604 retry.go:31] will retry after 13.637319362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET /api/v1/nodes/functional-389759 polled every ~500ms from 02:50:30.439 to 02:50:37.439, all refused; node_ready.go:55 logged the same "connection refused" warning at 02:50:31, 02:50:33, and 02:50:35 ...]
	I1216 02:50:37.464839 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:37.535691 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:37.535727 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.535748 1842604 retry.go:31] will retry after 13.417840341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms through 02:50:43.439, still refused; "connection refused" warnings at 02:50:37.939, 02:50:40, and 02:50:42 ...]
	I1216 02:50:43.615150 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:43.680878 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:43.680928 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.680947 1842604 retry.go:31] will retry after 17.388789533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms from 02:50:43.939 to 02:50:50.939, still refused; "connection refused" warnings at 02:50:44, 02:50:46, and 02:50:49 ...]
	I1216 02:50:50.954020 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:51.020279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:51.020323 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.020343 1842604 retry.go:31] will retry after 13.418822402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms from 02:50:51.440 to 02:51:00.939, still refused; node_ready.go:55 keeps logging "connection refused" roughly every 2s (02:50:51, 02:50:53, 02:50:55, 02:50:57, 02:51:00) ...]
	I1216 02:51:01.070030 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:01.132180 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:01.132233 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:01.132254 1842604 retry.go:31] will retry after 31.549707812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms through 02:51:04.439, still refused; another "connection refused" warning at 02:51:02.440 ...]
	I1216 02:51:04.439336 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:04.517286 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:04.517332 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:04.517352 1842604 retry.go:31] will retry after 44.886251271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms from 02:51:04.939 to 02:51:18.939 with the same result; node_ready.go:55 keeps logging "connection refused" roughly every 2s (02:51:04.940 through 02:51:18.439) ...]
	I1216 02:51:19.439138 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:19.439208 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:19.439515 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:19.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:19.939279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:19.939621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:20.439178 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:20.439262 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:20.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:20.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:20.939246 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:20.939513 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:20.939554 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:21.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:21.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:21.439595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:21.939329 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:21.939405 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:21.939763 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:22.439160 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:22.439238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:22.439500 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:22.939190 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:22.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:22.939658 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:22.939729 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:23.439235 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:23.439307 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:23.439624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:23.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:23.939222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:23.939532 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:24.439173 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:24.439247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:24.439604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:24.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:24.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:24.939597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:25.439196 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:25.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:25.439524 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:25.439565 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:25.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:25.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:25.939596 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:26.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:26.439242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:26.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:26.939226 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:26.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:26.939651 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:27.439209 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:27.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:27.439650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:27.439706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:27.939251 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:27.939341 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:27.939686 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:28.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:28.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:28.439565 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:28.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:28.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:28.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:29.439212 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:29.439293 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:29.439636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:29.939142 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:29.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:29.939475 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:29.939523 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:30.439207 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:30.439302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:30.439638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:30.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:30.939282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:30.939613 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:31.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:31.439209 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:31.439467 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:31.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:31.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:31.939567 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:31.939621 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:32.439180 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:32.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:32.439615 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
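The loop condensed above is a readiness poll: minikube issues a GET against the node object every ~500ms and retries on connection errors until an overall deadline expires. A minimal Go sketch of that pattern, assuming a hypothetical waitNodeReady helper and a 30s deadline (illustrative only, not minikube's actual node_ready implementation):

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitNodeReady polls url every 500ms until a response arrives or ctx expires.
    func waitNodeReady(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumption: the test cluster serves a self-signed certificate.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return fmt.Errorf("node never became reachable: %w", ctx.Err())
            case <-ticker.C:
                resp, err := client.Get(url)
                if err != nil {
                    fmt.Printf("will retry: %v\n", err) // mirrors the "connection refused" lines above
                    continue
                }
                resp.Body.Close()
                return nil
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        if err := waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-389759"); err != nil {
            fmt.Println(err)
        }
    }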
	I1216 02:51:32.683088 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:32.746941 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:32.746981 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:32.747001 1842604 retry.go:31] will retry after 33.271174209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
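The retry.go line above schedules a re-run of the failed apply after a randomized delay. A sketch of that retry shape, assuming a hypothetical retryApply wrapper and made-up backoff bounds (not minikube's actual retry package):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // retryApply re-runs `kubectl apply` on failure, sleeping a jittered delay between attempts.
    func retryApply(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            // Same shape as the logged command, minus the sudo/KUBECONFIG plumbing.
            out, runErr := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if runErr == nil {
                return nil
            }
            err = fmt.Errorf("apply failed: %v: %s", runErr, out)
            delay := time.Duration(10+rand.Intn(30)) * time.Second // assumed bounds, for illustration
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        if err := retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
            fmt.Println("giving up:", err)
        }
    }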
	[log condensed: the same 500ms poll of /api/v1/nodes/functional-389759 continues from 02:51:32.939 through 02:51:48.939, every attempt refused, with node_ready.go:55 retry warnings at 02:51:33.9, 02:51:36.4, 02:51:38.9, 02:51:40.9, 02:51:42.9, 02:51:45.4 and 02:51:47.4]
	I1216 02:51:49.404519 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:49.440015 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.440083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.440326 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:49.440365 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:49.475362 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475396 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475480 1842604 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[log condensed: polling continues unchanged from 02:51:49.939 through 02:52:05.939, every request refused, with node_ready.go:55 retry warnings recurring at the same ~2s cadence]
	I1216 02:52:06.018765 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:52:06.088979 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089023 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089108 1842604 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
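The storage-provisioner failure above is a side effect of the same outage: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so with nothing listening on port 8441 the apply fails during validation, before any manifest is even submitted. minikube treats the addon callback as retryable and kept re-running the apply for the nearly two minutes the addon phase took, but the apiserver never came back, so the phase ends with enabled=[] on the "Enabled addons:" line below. A sketch of that retry-the-apply pattern, with an illustrative helper name and retry policy (not minikube's actual retry logic):

package addonapply

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply" until it succeeds or attempts are
// exhausted; while the apiserver is down, every attempt fails in validation
// with "failed to download openapi ... connection refused", as in the log.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v: %s", err, out)
		time.Sleep(2 * time.Second)
	}
	return lastErr
}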
	I1216 02:52:06.092116 1842604 out.go:179] * Enabled addons: 
	I1216 02:52:06.094111 1842604 addons.go:530] duration metric: took 1m58.134103468s for enable addons: enabled=[]
	I1216 02:52:06.439418 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:06.439511 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:06.439875 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:06.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:06.939273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:06.939605 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:07.439150 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:07.439229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:07.439870 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:07.939188 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:07.939273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:07.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:07.939631 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	... (the poll repeats unchanged every ~500ms from 02:52:08 through 02:53:03: each GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 is answered with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the "will retry" warning roughly every two seconds throughout) ...
	I1216 02:53:03.939321 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.939395 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.439219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.439542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.939207 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.939477 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:05.939529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:06.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.439240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.439557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:06.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.939724 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.439508 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.939335 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.939685 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:07.939739 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:08.439494 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.439576 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.439903 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:08.939686 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.939764 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.940063 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.440032 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.440108 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.440422 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.939177 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.939592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:10.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:10.439589 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:10.939186 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.439598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:12.439168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:12.439639 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:12.939298 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.939374 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.939265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:14.439387 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.439472 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.439812 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:14.439867 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:14.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.939501 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.439631 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.939227 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.939303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.939647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.939242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.939600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:16.939656 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:17.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.439397 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.439744 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:17.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.439237 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.439576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.939765 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:18.939822 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:19.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.439538 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:19.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.939233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.439182 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:21.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.439583 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:21.439640 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:21.939297 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.939731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.439139 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.439207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.439539 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.939225 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.939299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:23.439346 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.439421 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.439771 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:23.439824 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:23.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.939483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.439299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.439632 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.939319 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.939706 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.439591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.939582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:25.939638 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:26.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.439592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:26.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.939259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.439601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.939270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.939597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:28.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:28.439529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:28.939176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.939576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.439313 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.439393 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.439725 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.939171 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:30.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:30.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:30.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:30.439655 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:30.939320 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:30.939418 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:30.939784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:31.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:31.439224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:31.439475 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:31.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:31.939236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:31.939534 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:32.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:32.439297 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:32.439620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:32.439676 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:32.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:32.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:32.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:33.439217 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:33.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:33.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:33.939225 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:33.939316 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:33.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:34.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:34.439245 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:34.439564 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:34.939204 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:34.939278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:34.939620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:34.939677 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:35.439212 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:35.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:35.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:35.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:35.939263 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:35.939567 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:36.439228 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:36.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:36.439642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:36.939277 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:36.939377 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:36.939732 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:36.939794 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:37.439162 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:37.439230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:37.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:37.939236 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:37.939314 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:37.939606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:38.439417 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:38.439490 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:38.439842 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:38.939161 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:38.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:38.939518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:39.439409 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:39.439482 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:39.439830 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:39.439883 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:39.939560 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:39.939640 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:39.939973 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:40.439727 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:40.439800 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:40.440066 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:40.939896 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:40.939970 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:40.940284 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:41.440124 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:41.440201 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:41.440497 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:41.440543 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:41.943162 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:41.943240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:41.943561 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:42.439267 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:42.439351 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:42.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:42.939415 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:42.939490 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:42.939836 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:43.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:43.439215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:43.439499 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:43.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:43.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:43.939603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:43.939661 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:44.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:44.439294 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:44.439587 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:44.939247 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:44.939347 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:44.939662 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:45.439358 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:45.439436 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:45.439789 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:45.939376 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:45.939453 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:45.939756 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:45.939804 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:46.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:46.439210 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:46.439473 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:46.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:46.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:46.939525 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:47.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:47.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:47.439589 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:47.939235 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:47.939329 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:47.939704 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:48.439754 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:48.439848 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:48.440188 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:48.440247 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:48.939992 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:48.940067 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:48.940363 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:49.439133 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:49.439200 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:49.439457 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:49.939208 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:49.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:49.939646 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:50.439384 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:50.439476 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:50.439821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:50.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:50.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:50.939481 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:50.939521 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:51.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:51.439279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:51.439611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:51.939222 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:51.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:51.939656 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:52.439241 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:52.439313 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:52.439612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:52.939395 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:52.939474 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:52.939857 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:52.939915 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:53.439594 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:53.439672 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:53.440023 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:53.939720 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:53.939790 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:53.940050 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:54.440097 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:54.440176 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:54.440526 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:54.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:54.939268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:54.939606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:55.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:55.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:55.439505 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:55.439556 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:55.939269 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:55.939344 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:55.939702 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:56.439437 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:56.439519 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:56.439881 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:56.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:56.939236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:56.939498 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:57.439223 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:57.439303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:57.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:57.439739 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:57.939423 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:57.939507 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:57.939912 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:58.439813 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:58.439885 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:58.440183 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:58.940058 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:58.940134 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:58.940478 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:59.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:59.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:59.439567 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:59.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:59.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:59.939485 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:59.939525 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:00.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:00.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:00.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:00.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:00.939250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:00.939578 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:01.439145 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:01.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:01.439472 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:01.939186 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:01.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:01.939561 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:01.939606 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:02.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:02.439281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:02.439642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:02.939161 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:02.939227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:02.939482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:03.439234 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:03.439344 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:03.439699 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:03.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:03.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:03.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:04.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:04.439222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:04.444773 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1216 02:54:04.444839 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:04.939175 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:04.939260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:04.939599 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:05.439171 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:05.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:05.439587 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:05.939156 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:05.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:05.939528 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:06.439204 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:06.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:06.439610 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:06.939204 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:06.939281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:06.939621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:06.939675 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:07.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:07.439235 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:07.439550 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:07.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:07.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:07.939834 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:08.439782 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:08.439866 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:08.440217 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:08.939622 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:08.939691 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:08.939951 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:08.939990 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:09.439901 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:09.439984 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:09.440340 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:09.940011 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:09.940084 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:09.940412 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:10.440017 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:10.440085 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:10.440369 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:10.939087 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:10.939163 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:10.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:11.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:11.439309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:11.439891 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:11.439947 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:11.939609 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:11.939684 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:11.939946 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:12.439716 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:12.439790 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:12.440122 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:12.939802 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:12.939880 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:12.940191 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:13.439954 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:13.440025 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:13.440286 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:13.440326 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:13.939084 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:13.939162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:13.939479 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:14.439289 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:14.439375 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:14.439670 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:14.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:14.939235 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:14.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:15.439219 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:15.439296 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:15.439631 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:15.939184 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:15.939257 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:15.939593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:15.939646 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:16.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:16.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:16.439539 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:16.939156 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:16.939234 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:16.939531 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:17.439208 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:17.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:17.439642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:17.939171 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:17.939240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:17.939532 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:18.439205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:18.439286 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:18.439581 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:18.439631 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:18.939322 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:18.939396 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:18.939736 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:19.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:19.439224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:19.439493 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:19.939315 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:19.939392 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:19.939729 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:20.439464 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:20.439552 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:20.439931 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:20.439986 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:20.939676 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:20.939743 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:20.940000 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:21.439750 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:21.439826 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:21.440155 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:21.939838 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:21.939912 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:21.940244 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:22.439927 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:22.440003 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:22.440320 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:22.440370 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:22.940106 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:22.940183 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:22.940523 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:23.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:23.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:23.439603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:23.939141 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:23.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:23.939482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:24.439339 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:24.439412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:24.439743 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:24.939456 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:24.939535 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:24.939838 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:24.939885 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:25.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:25.439298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:25.439551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:25.939177 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:25.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:25.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:26.439303 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:26.439383 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:26.439731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:26.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:26.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:26.939504 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:27.439205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:27.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:27.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:27.439700 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:27.939180 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:27.939258 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:27.939582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:28.439179 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:28.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:28.439531 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:28.939224 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:28.939310 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:28.939682 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:29.439497 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:29.439576 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:29.439876 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:29.439924 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:29.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:29.939230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:29.939551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:30.439209 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:30.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:30.439639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:30.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:30.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:30.939636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:31.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:31.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:31.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:31.939283 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:31.939364 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:31.939698 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:31.939756 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:32.439434 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:32.439515 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:32.439885 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:32.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:32.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:32.939520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:33.439177 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:33.439263 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:33.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:33.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:33.939412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:33.939761 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:33.939818 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:34.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:34.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:34.439568 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:34.939156 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:34.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:34.939595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:35.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:35.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:35.439639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:35.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:35.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:35.939616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:36.439210 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:36.439288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:36.439644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:36.439706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:36.939218 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:36.939300 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:36.939664 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:37.439159 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:37.439230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:37.439551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:37.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:37.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:37.939615 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:38.439345 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:38.439428 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:38.439811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:38.439872 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:38.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:38.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:38.939515 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:39.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:39.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:39.439608 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:39.939305 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:39.939380 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:39.939747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:40.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:40.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:40.439481 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:40.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:40.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:40.939568 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:40.939620 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:41.439300 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:41.439383 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:41.439721 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:41.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:41.939218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:41.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:42.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:42.439247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:42.439561 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:42.939283 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:42.939360 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:42.939717 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:42.939776 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:43.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.439482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:43.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.939617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.439181 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.439582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.939158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:45.439421 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.439558 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.440302 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:45.440450 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:45.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.939295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.939642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.439483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.939250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.939578 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.439549 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.940079 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.940162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.940421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:47.940463 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:48.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.439275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.439617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:48.939330 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.939404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.939700 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.439238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.439553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:50.439334 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.439719 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:50.439774 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:50.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.439267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.439563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.439158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.439503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.939205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:52.939669 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:53.439238 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.439324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:53.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.939529 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.439450 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.439530 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.439887 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.939607 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.939685 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:54.940065 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:55.439530 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.439603 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.439907 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:55.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.439329 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.439662 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.939519 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:57.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.439641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:57.439700 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:57.939371 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.939446 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.939781 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.439185 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.439281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.439643 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.939188 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:59.939684 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:00.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.439758 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:00.939212 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.939641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.939279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.939639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:01.939706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:02.439386 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.439463 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.439808 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:02.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.939260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.939584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:04.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.439530 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:04.439579 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:04.939230 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.939672 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.439280 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.439358 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.439751 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.939466 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:06.439161 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:06.439630 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:06.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.939644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.439279 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.439353 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.439621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.939358 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.939442 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.939811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:08.439810 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.439889 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.440239 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:08.440291 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:08.939609 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.939681 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.939929 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.439800 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.439881 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.440208 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.939526 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.939604 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.939943 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.439719 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.439792 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.440067 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.939812 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.939897 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.940243 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:10.940297 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:11.440018 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.440100 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.440421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:11.939675 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.939759 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.940020 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.439848 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.439919 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.440237 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.940077 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.940153 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.940509 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:12.940583 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:13.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:13.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.939268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.939610 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.439597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.939566 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:15.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.439663 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:15.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:15.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.939245 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.939569 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.439533 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.939603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:17.439324 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.439747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:17.439815 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:17.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.939524 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.439650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.939285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.439223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.439468 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:19.939616 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:20.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.439548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:20.939137 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.939207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.939465 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:21.439204 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:21.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:21.439657 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:21.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:21.939327 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:21.939664 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:21.939729 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:22.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:22.439225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:22.439535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:22.939190 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:22.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:22.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:23.439305 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:23.439389 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:23.439720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:23.939151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:23.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:23.939542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:24.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:24.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:24.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:24.439651 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:24.939180 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:24.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:24.939598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:25.439138 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:25.439211 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:25.439463 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:25.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:25.939244 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:25.939581 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:26.439089 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:26.439163 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:26.439495 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:26.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:26.939224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:26.939479 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:26.939524 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:27.439214 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:27.439294 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:27.439661 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:27.939373 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:27.939451 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:27.939805 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:28.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:28.439250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:28.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:28.939221 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:28.939302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:28.939614 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:28.939667 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:29.439402 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:29.439474 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:29.439787 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:29.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:29.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:29.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:30.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:30.439272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:30.439596 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:30.939323 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:30.939401 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:30.939721 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:30.939776 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:31.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:31.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:31.439500 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:31.939196 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:31.939328 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:31.939663 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:32.439376 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:32.439451 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:32.439836 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:32.939913 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:32.939990 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:32.940359 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:32.940407 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:33.439139 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:33.439212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:33.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:33.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:33.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:33.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:34.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:34.439249 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:34.439588 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:34.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:34.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:34.939600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:35.439211 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:35.439288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:35.439656 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:35.439710 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:35.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:35.939227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:35.939499 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:36.439164 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:36.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:36.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:36.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:36.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:36.939625 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:37.439164 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:37.439236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:37.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:37.939175 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:37.939257 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:37.939617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:37.939669 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:38.439358 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:38.439429 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:38.439750 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:38.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:38.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:38.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:39.439218 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:39.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:39.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:39.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:39.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:39.939619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:40.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:40.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:40.439559 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:40.439611 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:40.939175 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:40.939248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:40.939558 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:41.439196 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:41.439280 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:41.439613 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:41.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:41.939327 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:41.939647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:42.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:42.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:42.439611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:42.439668 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:42.939362 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:42.939437 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:42.939760 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:43.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:43.439213 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:43.439457 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:43.939186 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:43.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:43.939604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:44.439226 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:44.439307 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:44.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:44.439691 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:44.939151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:44.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:44.939480 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:45.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:45.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:45.439572 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:45.939300 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:45.939383 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:45.939706 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:46.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:46.439223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:46.439483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:46.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:46.939237 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:46.939588 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:46.939647 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:47.439293 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:47.439366 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:47.439696 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:47.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:47.939217 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:47.939541 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:48.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:48.439267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:48.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:48.939238 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:48.939317 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:48.939619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:49.439174 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:49.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:49.439543 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:49.439584 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:49.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:49.939231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:49.939567 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:50.439205 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:50.439290 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:50.439624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:50.943201 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:50.943274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:50.943612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:51.439328 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:51.439412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:51.439731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:51.439790 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:51.939213 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:51.939297 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:51.939626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:52.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:52.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:52.439526 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:52.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:52.939282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:52.939601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:53.439209 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:53.439284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:53.439606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:53.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:53.939231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:53.939494 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:53.939534 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:54.439193 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:54.439304 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:54.439606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:54.939208 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:54.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:54.939622 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:55.439163 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:55.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:55.439516 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:55.939231 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:55.939312 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:55.939665 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:55.939722 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:56.439386 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:56.439461 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:56.439770 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:56.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:56.939233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:56.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:57.439287 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:57.439372 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:57.439752 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:57.939469 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:57.939545 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:57.939881 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:57.939934 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:58.439663 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:58.439743 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:58.440003 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:58.939830 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:58.939903 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:58.940228 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:59.439135 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:59.439209 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:59.439557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:59.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:59.939239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:59.939583 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:00.439256 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:00.439337 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:00.439709 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:56:00.439775 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:56:00.939210 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:00.939292 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:00.939617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:01.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:01.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:01.439551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:01.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:01.939278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:01.939639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:02.439186 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:02.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:02.439602 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:02.939180 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:02.939253 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:02.939512 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:56:02.939560 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:56:03.439175 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:03.439253 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:03.439540 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:03.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:03.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:03.939596 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:04.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:04.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:04.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:04.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:04.939288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:04.939867 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:56:04.939916 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:56:05.439564 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:05.439639 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:05.439983 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:05.939766 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:05.939842 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:05.940108 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:06.439869 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:06.439941 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:06.440295 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:06.940114 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:06.940198 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:06.940608 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:56:06.940665 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:56:07.439180 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:07.439260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:07.439550 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:07.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:07.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:07.939618 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:08.439510 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:08.439620 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:08.440275 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:08.939119 1842604 node_ready.go:38] duration metric: took 6m0.000151723s for node "functional-389759" to be "Ready" ...
	I1216 02:56:08.942443 1842604 out.go:203] 
	W1216 02:56:08.945313 1842604 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 02:56:08.945510 1842604 out.go:285] * 
	* 
	W1216 02:56:08.947818 1842604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:56:08.950773 1842604 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-389759 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.638962899s for "functional-389759" cluster.
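The six minutes of output above are a single retry loop: node_ready.go polls the node object every 500ms, every dial is refused, and the wait only ends when the 6m context deadline fires (hence "WaitNodeCondition: context deadline exceeded" and the 6m0.000151723s duration metric). A minimal Go sketch of that wait pattern, assuming a plain HTTP GET in place of minikube's actual client-go node lookup:

package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls url every 500ms until it answers 200 OK or ctx expires.
func waitNodeReady(ctx context.Context, url string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				continue // e.g. "connect: connection refused" while the apiserver is down
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-389759")
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("WaitNodeCondition: context deadline exceeded")
	}
}

Against an apiserver that never comes back, every iteration takes the error branch, so the loop can only exit through ctx.Done() — the failure mode recorded above.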
I1216 02:56:09.533901 1798370 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
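The inspect dump above carries the network facts the test depends on: the container's 8441/tcp (apiserver) is published on 127.0.0.1:34357, SSH's 22/tcp on 127.0.0.1:34354, and the container holds 192.168.49.2 on the functional-389759 network. A hedged Go sketch of pulling one of those mappings out, using the same inspect template minikube's cli_runner runs later in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template as the cli_runner invocation further down;
	// the container name is the profile under test.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"functional-389759").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // "34354" for the dump above
}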
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (393.864714ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
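"exit status 2 (may be ok)" is consistent with the stdout above: --format={{.Host}} is a Go text/template rendered over minikube's status struct, so the command prints Running for the host container while the non-zero exit code reports that other components (here, the apiserver) are not healthy. A minimal sketch of that rendering, with an illustrative Status type rather than minikube's exact one:

package main

import (
	"os"
	"text/template"
)

// Status stands in for minikube's status struct; only Host is rendered
// by the template used in the command above.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	tmpl.Execute(os.Stdout, st) // prints "Running" even though the apiserver is down
}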
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-853651 addons list                                                                                                                           │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ addons         │ functional-853651 addons list -o json                                                                                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node-connect --url                                                                                                      │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-853651 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-853651 --alsologtostderr -v=1                                                                                          │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service list                                                                                                                          │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service list -o json                                                                                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service --namespace=default --https --url hello-node                                                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node --url --format={{.IP}}                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node --url                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format short --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format yaml --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ ssh            │ functional-853651 ssh pgrep buildkitd                                                                                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ image          │ functional-853651 image ls --format json --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format table --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls                                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ delete         │ -p functional-853651                                                                                                                                    │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-389759 --alsologtostderr -v=8                                                                                                             │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:50 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:50:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:50:03.940449 1842604 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:50:03.940640 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.940666 1842604 out.go:374] Setting ErrFile to fd 2...
	I1216 02:50:03.940685 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.941001 1842604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:50:03.941424 1842604 out.go:368] Setting JSON to false
	I1216 02:50:03.942302 1842604 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30748,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:50:03.942395 1842604 start.go:143] virtualization:  
	I1216 02:50:03.948050 1842604 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:50:03.951289 1842604 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:50:03.951381 1842604 notify.go:221] Checking for updates...
	I1216 02:50:03.954734 1842604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:50:03.957600 1842604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:03.960611 1842604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:50:03.963508 1842604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:50:03.966329 1842604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:50:03.969672 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:03.969806 1842604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:50:04.007031 1842604 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:50:04.007241 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.073702 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.062313817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.073812 1842604 docker.go:319] overlay module found
	I1216 02:50:04.077006 1842604 out.go:179] * Using the docker driver based on existing profile
	I1216 02:50:04.079902 1842604 start.go:309] selected driver: docker
	I1216 02:50:04.079932 1842604 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.080054 1842604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:50:04.080179 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.136011 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.126842192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.136427 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:04.136482 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:04.136533 1842604 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.139723 1842604 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:50:04.142545 1842604 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:50:04.145565 1842604 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:50:04.148399 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:04.148453 1842604 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:50:04.148469 1842604 cache.go:65] Caching tarball of preloaded images
	I1216 02:50:04.148474 1842604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:50:04.148567 1842604 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:50:04.148577 1842604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:50:04.148682 1842604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:50:04.168498 1842604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:50:04.168522 1842604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:50:04.168544 1842604 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:50:04.168575 1842604 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:50:04.168643 1842604 start.go:364] duration metric: took 46.539µs to acquireMachinesLock for "functional-389759"
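
The machine lock acquired above serializes concurrent minikube invocations against the same profile, with the Delay:500ms/Timeout:10m0s parameters printed in the struct. A rough shell analogue of the same acquire-with-timeout pattern, using flock and a hypothetical lock path rather than minikube's real lock implementation:

# take an exclusive lock on fd 9, waiting at most 600s (the Timeout:10m0s above)
(
  flock --exclusive --timeout 600 9 || exit 1
  echo "lock held: safe to provision functional-389759"
  # critical section: inspect and fix the machine while no other run can race us
) 9>/tmp/minikube-machines.lock   # hypothetical lock file path
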
	I1216 02:50:04.168667 1842604 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:50:04.168673 1842604 fix.go:54] fixHost starting: 
	I1216 02:50:04.168962 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:04.192862 1842604 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:50:04.192891 1842604 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:50:04.196202 1842604 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:50:04.196246 1842604 machine.go:94] provisionDockerMachine start ...
	I1216 02:50:04.196329 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.213973 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.214316 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.214325 1842604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:50:04.350600 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.350628 1842604 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:50:04.350691 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.368974 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.369299 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.369316 1842604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:50:04.513062 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.513215 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.531552 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.531870 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.531893 1842604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:50:04.663498 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:50:04.663573 1842604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:50:04.663612 1842604 ubuntu.go:190] setting up certificates
	I1216 02:50:04.663658 1842604 provision.go:84] configureAuth start
	I1216 02:50:04.663756 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:04.681830 1842604 provision.go:143] copyHostCerts
	I1216 02:50:04.681871 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681914 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:50:04.681921 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681996 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:50:04.682080 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682098 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:50:04.682107 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682134 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:50:04.682171 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682188 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:50:04.682192 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682218 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:50:04.682263 1842604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:50:04.918732 1842604 provision.go:177] copyRemoteCerts
	I1216 02:50:04.918803 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:50:04.918909 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.945401 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.043237 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 02:50:05.043301 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:50:05.061641 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 02:50:05.061702 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:50:05.079841 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 02:50:05.079956 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:50:05.097722 1842604 provision.go:87] duration metric: took 434.019439ms to configureAuth
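
configureAuth above re-issues the Docker machine server certificate with the SANs listed at 02:50:04.682263 and the CertExpiration:26280h0m0s (1095 days) from the cluster config. A hand-rolled openssl equivalent, with placeholder file names rather than the real .minikube paths, purely for illustration:

# generate a key and CSR for the machine, then sign it with the minikube CA,
# attaching the same subjectAltName set the log reports
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr -subj "/O=jenkins.functional-389759"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 1095 \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-389759,DNS:localhost,DNS:minikube')
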
	I1216 02:50:05.097754 1842604 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:50:05.097953 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:05.097967 1842604 machine.go:97] duration metric: took 901.714132ms to provisionDockerMachine
	I1216 02:50:05.097975 1842604 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:50:05.097987 1842604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:50:05.098051 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:50:05.098102 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.115383 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.211319 1842604 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:50:05.214768 1842604 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 02:50:05.214793 1842604 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 02:50:05.214797 1842604 command_runner.go:130] > VERSION_ID="12"
	I1216 02:50:05.214802 1842604 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 02:50:05.214807 1842604 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 02:50:05.214810 1842604 command_runner.go:130] > ID=debian
	I1216 02:50:05.214815 1842604 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 02:50:05.214820 1842604 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 02:50:05.214826 1842604 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 02:50:05.214871 1842604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:50:05.214894 1842604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:50:05.214911 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:50:05.214973 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:50:05.215088 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:50:05.215101 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /etc/ssl/certs/17983702.pem
	I1216 02:50:05.215203 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:50:05.215211 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> /etc/test/nested/copy/1798370/hosts
	I1216 02:50:05.215287 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:50:05.223273 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:05.241790 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:50:05.259718 1842604 start.go:296] duration metric: took 161.727689ms for postStartSetup
	I1216 02:50:05.259801 1842604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:50:05.259846 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.277760 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.371870 1842604 command_runner.go:130] > 18%
	I1216 02:50:05.372496 1842604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:50:05.377201 1842604 command_runner.go:130] > 161G
	I1216 02:50:05.377708 1842604 fix.go:56] duration metric: took 1.209030723s for fixHost
	I1216 02:50:05.377728 1842604 start.go:83] releasing machines lock for "functional-389759", held for 1.209073027s
	I1216 02:50:05.377811 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:05.395427 1842604 ssh_runner.go:195] Run: cat /version.json
	I1216 02:50:05.395497 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.395795 1842604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:50:05.395856 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.414621 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.417076 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.510754 1842604 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 02:50:05.510902 1842604 ssh_runner.go:195] Run: systemctl --version
	I1216 02:50:05.609923 1842604 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 02:50:05.612841 1842604 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 02:50:05.612896 1842604 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 02:50:05.613034 1842604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 02:50:05.617736 1842604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 02:50:05.617774 1842604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:50:05.617838 1842604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:50:05.626000 1842604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:50:05.626028 1842604 start.go:496] detecting cgroup driver to use...
	I1216 02:50:05.626059 1842604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:50:05.626109 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:50:05.644077 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:50:05.659636 1842604 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:50:05.659709 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:50:05.676805 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:50:05.692573 1842604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:50:05.816755 1842604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:50:05.944883 1842604 docker.go:234] disabling docker service ...
	I1216 02:50:05.944952 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:50:05.960111 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:50:05.973273 1842604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:50:06.102700 1842604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:50:06.226099 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:50:06.239914 1842604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:50:06.254235 1842604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 02:50:06.255720 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:50:06.265881 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:50:06.274988 1842604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:50:06.275099 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:50:06.284319 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.293767 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:50:06.302914 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.312051 1842604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:50:06.320364 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:50:06.329464 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:50:06.338574 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:50:06.347623 1842604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:50:06.354520 1842604 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 02:50:06.355609 1842604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:50:06.363468 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:06.501216 1842604 ssh_runner.go:195] Run: sudo systemctl restart containerd
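
The sed invocations above amount to one reconfiguration pass over /etc/containerd/config.toml: pin the pause image, force the cgroupfs driver (SystemdCgroup = false, matching the cgroup driver detected on the host), and migrate v1 runtime references to runc.v2. Condensed into a single sketch using the same expressions the log shows:

cfg=/etc/containerd/config.toml
sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
sudo systemctl daemon-reload && sudo systemctl restart containerd
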
	I1216 02:50:06.641570 1842604 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:50:06.641646 1842604 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:50:06.645599 1842604 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1216 02:50:06.645623 1842604 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 02:50:06.645629 1842604 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1216 02:50:06.645636 1842604 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:06.645642 1842604 command_runner.go:130] > Access: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645647 1842604 command_runner.go:130] > Modify: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645652 1842604 command_runner.go:130] > Change: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645656 1842604 command_runner.go:130] >  Birth: -
	I1216 02:50:06.645685 1842604 start.go:564] Will wait 60s for crictl version
	I1216 02:50:06.645740 1842604 ssh_runner.go:195] Run: which crictl
	I1216 02:50:06.649139 1842604 command_runner.go:130] > /usr/local/bin/crictl
	I1216 02:50:06.649430 1842604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:50:06.676623 1842604 command_runner.go:130] > Version:  0.1.0
	I1216 02:50:06.676645 1842604 command_runner.go:130] > RuntimeName:  containerd
	I1216 02:50:06.676661 1842604 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1216 02:50:06.676671 1842604 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 02:50:06.676683 1842604 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
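
These runtime probes can be replayed by hand on the node, since crictl reads the endpoint written to /etc/crictl.yaml a moment earlier:

sudo crictl version              # RuntimeName/RuntimeVersion/RuntimeApiVersion, as parsed above
sudo crictl images --output json # the image inventory dumped below
containerd --version             # the containerd.io v2.2.0 build string
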
	I1216 02:50:06.676740 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.701508 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.703452 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.721412 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.729453 1842604 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:50:06.732626 1842604 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:50:06.754519 1842604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:50:06.758684 1842604 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 02:50:06.758798 1842604 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:50:06.758921 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:06.758993 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.783030 1842604 command_runner.go:130] > {
	I1216 02:50:06.783088 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.783093 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783103 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.783109 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783114 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.783117 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783121 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783130 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.783133 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783138 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.783142 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783146 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783149 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783152 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783160 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.783163 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783169 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.783172 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783176 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783185 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.783188 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783192 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.783196 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783200 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783204 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783207 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783214 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.783224 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783229 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.783232 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783243 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783251 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.783254 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783258 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.783262 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.783266 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783269 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783272 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783278 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.783282 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783287 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.783290 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783294 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783305 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.783308 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783312 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.783317 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783321 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783324 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783328 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783332 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783337 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783340 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783347 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.783351 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.783359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783363 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783370 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.783374 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783381 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.783384 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783392 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783395 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783399 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783403 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783406 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783409 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783415 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.783419 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783424 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.783427 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783431 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783439 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.783442 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783446 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.783450 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783454 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783457 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783461 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783464 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783467 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783470 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783476 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.783480 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783485 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.783488 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783492 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783499 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.783503 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783506 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.783510 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783514 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783520 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783524 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783530 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.783534 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783540 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.783543 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783546 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783554 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.783557 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.783568 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783572 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783575 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783579 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783582 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783585 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783588 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783595 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.783599 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783604 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.783608 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783611 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783619 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.783622 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783625 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.783629 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783633 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.783636 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783639 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783643 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.783646 1842604 command_runner.go:130] >     }
	I1216 02:50:06.783648 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.783651 1842604 command_runner.go:130] > }
	I1216 02:50:06.785559 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.785577 1842604 containerd.go:534] Images already preloaded, skipping extraction
	I1216 02:50:06.785637 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.809062 1842604 command_runner.go:130] > {
	I1216 02:50:06.809080 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.809085 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809094 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.809099 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809105 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.809108 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809112 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809121 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.809125 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809129 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.809133 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809137 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809140 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809143 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809153 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.809157 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809162 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.809166 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809170 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809178 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.809181 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809186 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.809189 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809193 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809196 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809199 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809207 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.809211 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809216 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.809219 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809226 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809235 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.809241 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809246 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.809250 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.809254 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809257 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809260 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809267 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.809270 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809276 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.809279 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809283 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809291 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.809294 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809298 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.809303 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809307 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809311 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809315 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809318 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809322 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809325 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809332 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.809335 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809341 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.809344 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809348 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.809359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809364 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.809367 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809379 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809382 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809386 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809393 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809396 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809399 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809406 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.809410 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809416 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.809419 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809423 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809432 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.809435 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809439 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.809443 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809447 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809450 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809453 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809461 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809464 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809467 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809475 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.809478 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809483 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.809486 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809490 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809498 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.809501 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809505 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.809509 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809513 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809516 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809519 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809526 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.809530 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809535 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.809541 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809545 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809553 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.809556 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.809564 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809568 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809571 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809575 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809579 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809582 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809585 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809591 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.809595 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809599 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.809602 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809606 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809614 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.809616 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809620 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.809624 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809627 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.809632 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809635 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809639 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.809642 1842604 command_runner.go:130] >     }
	I1216 02:50:06.809645 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.809648 1842604 command_runner.go:130] > }
	I1216 02:50:06.811271 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.811300 1842604 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:50:06.811308 1842604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:50:06.811452 1842604 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
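
The empty ExecStart= followed by a populated one in the unit text above is the standard systemd override idiom: a blank ExecStart= clears the unit's packaged command so the drop-in's value replaces it instead of appending a second one. A hypothetical reconstruction of how this lands on disk (the drop-in path is illustrative; minikube writes the real unit itself):

sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
EOF
sudo systemctl daemon-reload
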
	I1216 02:50:06.811544 1842604 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:50:06.842167 1842604 command_runner.go:130] > {
	I1216 02:50:06.842188 1842604 command_runner.go:130] >   "cniconfig": {
	I1216 02:50:06.842194 1842604 command_runner.go:130] >     "Networks": [
	I1216 02:50:06.842198 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842203 1842604 command_runner.go:130] >         "Config": {
	I1216 02:50:06.842208 1842604 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1216 02:50:06.842213 1842604 command_runner.go:130] >           "Name": "cni-loopback",
	I1216 02:50:06.842217 1842604 command_runner.go:130] >           "Plugins": [
	I1216 02:50:06.842220 1842604 command_runner.go:130] >             {
	I1216 02:50:06.842224 1842604 command_runner.go:130] >               "Network": {
	I1216 02:50:06.842229 1842604 command_runner.go:130] >                 "ipam": {},
	I1216 02:50:06.842234 1842604 command_runner.go:130] >                 "type": "loopback"
	I1216 02:50:06.842238 1842604 command_runner.go:130] >               },
	I1216 02:50:06.842243 1842604 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1216 02:50:06.842246 1842604 command_runner.go:130] >             }
	I1216 02:50:06.842249 1842604 command_runner.go:130] >           ],
	I1216 02:50:06.842259 1842604 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1216 02:50:06.842263 1842604 command_runner.go:130] >         },
	I1216 02:50:06.842268 1842604 command_runner.go:130] >         "IFName": "lo"
	I1216 02:50:06.842271 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842275 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842279 1842604 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1216 02:50:06.842283 1842604 command_runner.go:130] >     "PluginDirs": [
	I1216 02:50:06.842287 1842604 command_runner.go:130] >       "/opt/cni/bin"
	I1216 02:50:06.842291 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842298 1842604 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1216 02:50:06.842301 1842604 command_runner.go:130] >     "Prefix": "eth"
	I1216 02:50:06.842304 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842308 1842604 command_runner.go:130] >   "config": {
	I1216 02:50:06.842312 1842604 command_runner.go:130] >     "cdiSpecDirs": [
	I1216 02:50:06.842315 1842604 command_runner.go:130] >       "/etc/cdi",
	I1216 02:50:06.842320 1842604 command_runner.go:130] >       "/var/run/cdi"
	I1216 02:50:06.842328 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842331 1842604 command_runner.go:130] >     "cni": {
	I1216 02:50:06.842335 1842604 command_runner.go:130] >       "binDir": "",
	I1216 02:50:06.842338 1842604 command_runner.go:130] >       "binDirs": [
	I1216 02:50:06.842342 1842604 command_runner.go:130] >         "/opt/cni/bin"
	I1216 02:50:06.842345 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.842349 1842604 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1216 02:50:06.842352 1842604 command_runner.go:130] >       "confTemplate": "",
	I1216 02:50:06.842356 1842604 command_runner.go:130] >       "ipPref": "",
	I1216 02:50:06.842359 1842604 command_runner.go:130] >       "maxConfNum": 1,
	I1216 02:50:06.842364 1842604 command_runner.go:130] >       "setupSerially": false,
	I1216 02:50:06.842368 1842604 command_runner.go:130] >       "useInternalLoopback": false
	I1216 02:50:06.842371 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842378 1842604 command_runner.go:130] >     "containerd": {
	I1216 02:50:06.842382 1842604 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1216 02:50:06.842387 1842604 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1216 02:50:06.842392 1842604 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1216 02:50:06.842396 1842604 command_runner.go:130] >       "runtimes": {
	I1216 02:50:06.842399 1842604 command_runner.go:130] >         "runc": {
	I1216 02:50:06.842404 1842604 command_runner.go:130] >           "ContainerAnnotations": null,
	I1216 02:50:06.842415 1842604 command_runner.go:130] >           "PodAnnotations": null,
	I1216 02:50:06.842421 1842604 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1216 02:50:06.842425 1842604 command_runner.go:130] >           "cgroupWritable": false,
	I1216 02:50:06.842429 1842604 command_runner.go:130] >           "cniConfDir": "",
	I1216 02:50:06.842433 1842604 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1216 02:50:06.842436 1842604 command_runner.go:130] >           "io_type": "",
	I1216 02:50:06.842439 1842604 command_runner.go:130] >           "options": {
	I1216 02:50:06.842443 1842604 command_runner.go:130] >             "BinaryName": "",
	I1216 02:50:06.842448 1842604 command_runner.go:130] >             "CriuImagePath": "",
	I1216 02:50:06.842451 1842604 command_runner.go:130] >             "CriuWorkPath": "",
	I1216 02:50:06.842455 1842604 command_runner.go:130] >             "IoGid": 0,
	I1216 02:50:06.842458 1842604 command_runner.go:130] >             "IoUid": 0,
	I1216 02:50:06.842462 1842604 command_runner.go:130] >             "NoNewKeyring": false,
	I1216 02:50:06.842469 1842604 command_runner.go:130] >             "Root": "",
	I1216 02:50:06.842473 1842604 command_runner.go:130] >             "ShimCgroup": "",
	I1216 02:50:06.842480 1842604 command_runner.go:130] >             "SystemdCgroup": false
	I1216 02:50:06.842483 1842604 command_runner.go:130] >           },
	I1216 02:50:06.842488 1842604 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1216 02:50:06.842494 1842604 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1216 02:50:06.842499 1842604 command_runner.go:130] >           "runtimePath": "",
	I1216 02:50:06.842504 1842604 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1216 02:50:06.842508 1842604 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1216 02:50:06.842512 1842604 command_runner.go:130] >           "snapshotter": ""
	I1216 02:50:06.842515 1842604 command_runner.go:130] >         }
	I1216 02:50:06.842518 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842521 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842530 1842604 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1216 02:50:06.842535 1842604 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1216 02:50:06.842541 1842604 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1216 02:50:06.842546 1842604 command_runner.go:130] >     "disableApparmor": false,
	I1216 02:50:06.842550 1842604 command_runner.go:130] >     "disableHugetlbController": true,
	I1216 02:50:06.842554 1842604 command_runner.go:130] >     "disableProcMount": false,
	I1216 02:50:06.842558 1842604 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1216 02:50:06.842562 1842604 command_runner.go:130] >     "enableCDI": true,
	I1216 02:50:06.842565 1842604 command_runner.go:130] >     "enableSelinux": false,
	I1216 02:50:06.842569 1842604 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1216 02:50:06.842573 1842604 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1216 02:50:06.842578 1842604 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1216 02:50:06.842582 1842604 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1216 02:50:06.842586 1842604 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1216 02:50:06.842590 1842604 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1216 02:50:06.842595 1842604 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1216 02:50:06.842600 1842604 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842604 1842604 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1216 02:50:06.842610 1842604 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842614 1842604 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1216 02:50:06.842622 1842604 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1216 02:50:06.842625 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842628 1842604 command_runner.go:130] >   "features": {
	I1216 02:50:06.842632 1842604 command_runner.go:130] >     "supplemental_groups_policy": true
	I1216 02:50:06.842635 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842639 1842604 command_runner.go:130] >   "golang": "go1.24.9",
	I1216 02:50:06.842649 1842604 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842658 1842604 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842662 1842604 command_runner.go:130] >   "runtimeHandlers": [
	I1216 02:50:06.842665 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842668 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842672 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842676 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842679 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842682 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842685 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842688 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842693 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842697 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842700 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842703 1842604 command_runner.go:130] >       "name": "runc"
	I1216 02:50:06.842706 1842604 command_runner.go:130] >     }
	I1216 02:50:06.842709 1842604 command_runner.go:130] >   ],
	I1216 02:50:06.842713 1842604 command_runner.go:130] >   "status": {
	I1216 02:50:06.842716 1842604 command_runner.go:130] >     "conditions": [
	I1216 02:50:06.842719 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842723 1842604 command_runner.go:130] >         "message": "",
	I1216 02:50:06.842730 1842604 command_runner.go:130] >         "reason": "",
	I1216 02:50:06.842734 1842604 command_runner.go:130] >         "status": true,
	I1216 02:50:06.842739 1842604 command_runner.go:130] >         "type": "RuntimeReady"
	I1216 02:50:06.842742 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842745 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842756 1842604 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1216 02:50:06.842764 1842604 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1216 02:50:06.842775 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842779 1842604 command_runner.go:130] >         "type": "NetworkReady"
	I1216 02:50:06.842782 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842785 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842816 1842604 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1216 02:50:06.842831 1842604 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1216 02:50:06.842837 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842848 1842604 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1216 02:50:06.842852 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842855 1842604 command_runner.go:130] >     ]
	I1216 02:50:06.842857 1842604 command_runner.go:130] >   }
	I1216 02:50:06.842860 1842604 command_runner.go:130] > }
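
	The JSON dump above matches the shape of `crictl info` output: containerd reports RuntimeReady=true but NetworkReady=false until a CNI plugin (here, kindnet) writes a config into /etc/cni/net.d. A minimal Go sketch (hypothetical, not minikube code) that decodes the status.conditions array from such a dump, using a trimmed sample payload:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// condition mirrors one entry of the "status.conditions" array in the dump above.
	type condition struct {
		Type    string `json:"type"`
		Status  bool   `json:"status"`
		Reason  string `json:"reason"`
		Message string `json:"message"`
	}

	type criInfo struct {
		Status struct {
			Conditions []condition `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		// Trimmed sample of the dump printed in the log above.
		dump := []byte(`{"status":{"conditions":[
		  {"type":"RuntimeReady","status":true,"reason":"","message":""},
		  {"type":"NetworkReady","status":false,"reason":"NetworkPluginNotReady",
		   "message":"Network plugin returns error: cni plugin not initialized"}]}}`)
		var info criInfo
		if err := json.Unmarshal(dump, &info); err != nil {
			panic(err)
		}
		for _, c := range info.Status.Conditions {
			fmt.Printf("%-12s ready=%-5v reason=%q\n", c.Type, c.Status, c.Reason)
		}
	}
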
	I1216 02:50:06.845895 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:06.845921 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:06.845936 1842604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:50:06.845966 1842604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:50:06.846165 1842604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
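	The rendered config pins the pod subnet (10.244.0.0/16, echoed as clusterCIDR in the KubeProxyConfiguration) and the service subnet (10.96.0.0/12); the two ranges must not overlap, or service VIPs would collide with pod IPs. A stand-alone sanity check for those two constants, a sketch only (not part of the test):

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		podCIDR := netip.MustParsePrefix("10.244.0.0/16") // networking.podSubnet above
		svcCIDR := netip.MustParsePrefix("10.96.0.0/12")  // networking.serviceSubnet above
		// Overlaps reports whether the two prefixes share any addresses; want false.
		fmt.Println("pod/service CIDR overlap:", podCIDR.Overlaps(svcCIDR))
	}
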
	I1216 02:50:06.846270 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:50:06.854737 1842604 command_runner.go:130] > kubeadm
	I1216 02:50:06.854757 1842604 command_runner.go:130] > kubectl
	I1216 02:50:06.854762 1842604 command_runner.go:130] > kubelet
	I1216 02:50:06.854790 1842604 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:50:06.854884 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:50:06.863474 1842604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:50:06.877235 1842604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:50:06.893176 1842604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 02:50:06.907542 1842604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:50:06.911554 1842604 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 02:50:06.912008 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.032285 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:07.187841 1842604 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:50:07.187907 1842604 certs.go:195] generating shared ca certs ...
	I1216 02:50:07.187938 1842604 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.188113 1842604 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:50:07.188262 1842604 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:50:07.188293 1842604 certs.go:257] generating profile certs ...
	I1216 02:50:07.188479 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:50:07.188626 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:50:07.188704 1842604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:50:07.188746 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 02:50:07.188833 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 02:50:07.188865 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 02:50:07.188913 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 02:50:07.188955 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 02:50:07.188991 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 02:50:07.189039 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 02:50:07.189094 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 02:50:07.189217 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:50:07.189294 1842604 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:50:07.189332 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:50:07.189413 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:50:07.189488 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:50:07.189568 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:50:07.189665 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:07.189733 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.189792 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem -> /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.189829 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.192734 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:50:07.215749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:50:07.236395 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:50:07.256110 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:50:07.276540 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:50:07.296274 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:50:07.314749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:50:07.333206 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:50:07.351818 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:50:07.370275 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:50:07.390851 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:50:07.409219 1842604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:50:07.421911 1842604 ssh_runner.go:195] Run: openssl version
	I1216 02:50:07.427966 1842604 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 02:50:07.428408 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.436062 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:50:07.443738 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447498 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447742 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447801 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.490768 1842604 command_runner.go:130] > b5213941
	I1216 02:50:07.491273 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:50:07.498894 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.506703 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:50:07.514440 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518338 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518429 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518508 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.559640 1842604 command_runner.go:130] > 51391683
	I1216 02:50:07.560095 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:50:07.567522 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.574982 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:50:07.582626 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586721 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586817 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586878 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.628240 1842604 command_runner.go:130] > 3ec20f2e
	I1216 02:50:07.628688 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:50:07.636367 1842604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640270 1842604 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640300 1842604 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 02:50:07.640307 1842604 command_runner.go:130] > Device: 259,1	Inode: 2346079     Links: 1
	I1216 02:50:07.640313 1842604 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:07.640320 1842604 command_runner.go:130] > Access: 2025-12-16 02:45:59.904024015 +0000
	I1216 02:50:07.640326 1842604 command_runner.go:130] > Modify: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640331 1842604 command_runner.go:130] > Change: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640338 1842604 command_runner.go:130] >  Birth: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640415 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:50:07.685787 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.686316 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:50:07.726862 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.727358 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:50:07.769278 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.769775 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:50:07.810792 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.811300 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:50:07.852245 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.852345 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 02:50:07.894213 1842604 command_runner.go:130] > Certificate will not expire
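
	Each `openssl x509 ... -checkend 86400` run above asks whether a certificate expires within the next 86400 seconds (24 hours), and the `ln -fs` / `<hash>.0` symlinks created just before (e.g. /etc/ssl/certs/b5213941.0) are how OpenSSL's hashed-directory lookup locates the CA files. The same expiry check expressed in Go, as a minimal sketch reusing a cert path from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the stat call in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `-checkend 86400`: does the cert outlive the next 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
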
	I1216 02:50:07.894706 1842604 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:07.894832 1842604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:50:07.894910 1842604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:50:07.922900 1842604 cri.go:89] found id: ""
	I1216 02:50:07.922983 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:50:07.930226 1842604 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 02:50:07.930256 1842604 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 02:50:07.930263 1842604 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 02:50:07.931439 1842604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:50:07.931499 1842604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:50:07.931562 1842604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:50:07.943740 1842604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:50:07.944155 1842604 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.944257 1842604 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389759" cluster setting kubeconfig missing "functional-389759" context setting]
	I1216 02:50:07.944564 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.945009 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.945157 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:07.945886 1842604 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 02:50:07.945970 1842604 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 02:50:07.945985 1842604 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 02:50:07.945994 1842604 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 02:50:07.946007 1842604 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 02:50:07.946011 1842604 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 02:50:07.946339 1842604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:50:07.958263 1842604 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 02:50:07.958306 1842604 kubeadm.go:602] duration metric: took 26.787333ms to restartPrimaryControlPlane
	I1216 02:50:07.958316 1842604 kubeadm.go:403] duration metric: took 63.631777ms to StartCluster
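
	The `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` above is the entire reconfiguration decision: if the freshly rendered config matches what is already on disk, the running control plane is reused ("does not require reconfiguration"). A minimal stand-in for that comparison (minikube shells out to diff; this byte-level version is illustrative only):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		cur, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")       // config in use
		fresh, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // just rendered
		if err1 != nil || err2 != nil {
			fmt.Println("cannot compare:", err1, err2)
			return
		}
		if bytes.Equal(cur, fresh) {
			fmt.Println("running cluster does not require reconfiguration")
		} else {
			fmt.Println("config drift detected: restart control plane with new config")
		}
	}
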
	I1216 02:50:07.958333 1842604 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.958427 1842604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.959238 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.959525 1842604 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 02:50:07.959950 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:07.960006 1842604 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 02:50:07.960112 1842604 addons.go:70] Setting storage-provisioner=true in profile "functional-389759"
	I1216 02:50:07.960129 1842604 addons.go:239] Setting addon storage-provisioner=true in "functional-389759"
	I1216 02:50:07.960152 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:07.960945 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.961166 1842604 addons.go:70] Setting default-storageclass=true in profile "functional-389759"
	I1216 02:50:07.961188 1842604 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389759"
	I1216 02:50:07.961453 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.966091 1842604 out.go:179] * Verifying Kubernetes components...
	I1216 02:50:07.968861 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.999405 1842604 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:50:08.003951 1842604 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.003988 1842604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:50:08.004070 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.016743 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:08.016935 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:08.017229 1842604 addons.go:239] Setting addon default-storageclass=true in "functional-389759"
	I1216 02:50:08.017278 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:08.017759 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:08.056545 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.063547 1842604 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.063573 1842604 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:50:08.063643 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.096801 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.182820 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:08.204300 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.216429 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.938921 1842604 node_ready.go:35] waiting up to 6m0s for node "functional-389759" to be "Ready" ...
	I1216 02:50:08.939066 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:08.939127 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939127 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1216 02:50:08.939303 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939364 1842604 retry.go:31] will retry after 371.599151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939463 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:08.939655 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939681 1842604 retry.go:31] will retry after 208.586178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.149421 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.213240 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.213284 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.213304 1842604 retry.go:31] will retry after 201.914515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.311585 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.373333 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.373376 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.373396 1842604 retry.go:31] will retry after 439.688248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
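
	Every `kubectl apply` in this stretch fails with `connection refused` because the apiserver behind localhost:8441 has not come back up yet after the kubelet restart, so the addon applier keeps retrying with growing, jittered delays (208ms, 371ms, 439ms, ... in the retry.go lines). A self-contained sketch of that retry shape, with hypothetical names rather than minikube's actual retry helper:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts run out, sleeping with jittered
	// exponential backoff in between, mirroring the "will retry after ..." lines.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base << i                                // exponential growth
			delay += time.Duration(rand.Int63n(int64(delay))) // plus jitter
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		_ = retry(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 4 { // pretend the apiserver comes up on the 4th attempt
				return fmt.Errorf("connect: connection refused")
			}
			return nil
		})
		fmt.Println("apply succeeded after", calls, "attempts")
	}
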
	I1216 02:50:09.415509 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.439207 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.483422 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.483469 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.483489 1842604 retry.go:31] will retry after 841.778226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.814006 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.876109 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.880285 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.880327 1842604 retry.go:31] will retry after 574.892877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.939502 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.939583 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.939923 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.325447 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:10.394946 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.394995 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.395014 1842604 retry.go:31] will retry after 1.198470662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.439106 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.439176 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.439428 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.455825 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:10.523765 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.523815 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.523848 1842604 retry.go:31] will retry after 636.325982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.939367 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:10.939781 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
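
	In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-389759 roughly every half second until the node reports Ready. The equivalent check via client-go, as an illustrative sketch (the kubeconfig path and node name come from the log; the rest is hypothetical):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22158-1796512/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-389759", metav1.GetOptions{})
		if err != nil {
			panic(err) // "connection refused" while the apiserver is still down
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("node Ready condition:", c.Status)
			}
		}
	}
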
	I1216 02:50:11.161191 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:11.242833 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.242908 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.242943 1842604 retry.go:31] will retry after 1.140424726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.439654 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:11.594053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:11.649408 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.653163 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.653194 1842604 retry.go:31] will retry after 1.344955883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.939594 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.939687 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.940009 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.383614 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:12.439264 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.439344 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.440165 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:12.443835 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.443867 1842604 retry.go:31] will retry after 2.819298169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
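
kubectl's stderr suggests --validate=false because client-side validation must first download /openapi/v2 from the apiserver. A hypothetical fallback that retries with validation disabled is sketched below; note this only skips validation, so while the apiserver itself refuses connections (as it does throughout this log) the apply would still fail. applyWithFallback is an invented helper; minikube's actual behavior, per the retry.go lines, is simply to retry unchanged.

// Hypothetical sketch: retry an apply with --validate=false when the
// openapi download fails. Not minikube's behavior.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func applyWithFallback(kubectl, manifest string) error {
	out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
	if err == nil {
		return nil
	}
	if strings.Contains(string(out), "failed to download openapi") {
		// apiserver not yet serving /openapi/v2; skip client-side validation
		out, err = exec.Command(kubectl, "apply", "--force", "--validate=false", "-f", manifest).CombinedOutput()
	}
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := applyWithFallback(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/etc/kubernetes/addons/storageclass.yaml",
	); err != nil {
		fmt.Println(err)
	}
}
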
	I1216 02:50:12.939234 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.939324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.999127 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:13.066096 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:13.066142 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.066177 1842604 retry.go:31] will retry after 2.29209329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.439591 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.439676 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.440017 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:13.440078 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
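
The node_ready.go warnings come from polling the node's Ready condition, which cannot succeed until the apiserver answers again. A self-contained sketch of the same check using client-go, assuming the kubeconfig path shown in the log; nodeReady, the 500ms cadence, and the one-minute deadline are illustrative:

// Sketch: poll a node's Ready condition with client-go, tolerating
// "connection refused" errors while the apiserver restarts.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" as in the log
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		ready, err := nodeReady(context.Background(), cs, "functional-389759")
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
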
	I1216 02:50:13.939859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.939946 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.940333 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.439599 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:15.264053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:15.323662 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.327080 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.327109 1842604 retry.go:31] will retry after 3.65241611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.359324 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:15.421588 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.421635 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.421654 1842604 retry.go:31] will retry after 1.62104706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.439778 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.439879 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.440170 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:15.440216 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:15.940008 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.940084 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.940410 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.440078 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.440155 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.440450 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:17.043912 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:17.099707 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:17.103362 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.103395 1842604 retry.go:31] will retry after 4.481188348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.439835 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.439929 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.440261 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:17.440327 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:17.940004 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.940083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.940382 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.439649 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.939235 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.939311 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.939696 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.980018 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:19.042087 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:19.045748 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.045786 1842604 retry.go:31] will retry after 3.780614615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.439604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:19.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.939337 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.939666 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:19.939722 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:20.439426 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.439516 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.439851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:20.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.939502 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.439268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.585043 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:21.648279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:21.648322 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.648342 1842604 retry.go:31] will retry after 5.326379112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
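
The storage-provisioner and storageclass applies interleave with independent retry timers, which is consistent with one goroutine per addon manifest. An illustrative sketch of that structure with errgroup (retryApply here makes a single attempt for brevity; the real flow would wrap it in the backoff loop sketched earlier):

// Illustrative only: apply several addon manifests concurrently,
// one goroutine each, collecting the first error.
package main

import (
	"fmt"
	"os/exec"

	"golang.org/x/sync/errgroup"
)

func retryApply(manifest string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	var g errgroup.Group
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		manifest := m // capture loop variable for the goroutine
		g.Go(func() error { return retryApply(manifest) })
	}
	if err := g.Wait(); err != nil {
		fmt.Println("addon apply failed:", err)
	}
}
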
	I1216 02:50:21.939713 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.939790 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.940115 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:21.940177 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:22.439859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.439927 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.440196 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:22.826669 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:22.887724 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:22.891256 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.891291 1842604 retry.go:31] will retry after 7.007720529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.939466 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.939552 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.939870 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.439633 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.439715 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.440036 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.939677 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.939748 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.940008 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:24.439927 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.440005 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.440343 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:24.440400 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:24.939690 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.939766 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.940068 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.439712 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.439799 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.440085 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.939947 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.940024 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.940358 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.439107 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.439185 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.939570 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:26.939627 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:26.975786 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:27.047539 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:27.047579 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.047598 1842604 retry.go:31] will retry after 10.416340882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.439244 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.439321 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.439647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:27.939345 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.939450 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.939785 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.439175 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.439255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.439518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.939274 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.939371 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.939720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:28.939777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:29.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.439629 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.899356 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:29.940020 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.940094 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.940346 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.975996 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:29.976895 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:29.976922 1842604 retry.go:31] will retry after 13.637319362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:30.439216 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.439575 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:30.939293 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.939381 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:31.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:31.439575 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:31.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.939298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.439412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.439784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:33.439275 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.439356 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.439660 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:33.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:33.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.439634 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.439714 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.439961 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.939579 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.939653 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:35.439846 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.439925 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.440292 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:35.440352 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:35.940006 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.940080 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.940335 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.440174 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.440258 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.440580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.939312 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.939396 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.939727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.439556 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.464839 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:37.535691 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:37.535727 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.535748 1842604 retry.go:31] will retry after 13.417840341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.939229 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:37.939658 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:38.439518 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.439602 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.439942 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:38.939665 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.939784 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.940059 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.440088 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.440162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.440456 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.939587 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:40.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.439222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.439491 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:40.439540 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:40.939209 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.439307 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.939288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.939552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:42.439218 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.439302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.439616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:42.439677 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:42.939334 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.939784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.439162 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.439595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.615150 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:43.680878 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:43.680928 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.680947 1842604 retry.go:31] will retry after 17.388789533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.939315 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.939409 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.939851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the request/response cycle above repeats every ~500ms, 14 times, from 02:50:44.439 through 02:50:50.939; every response is empty (status="", 0 ms), and node_ready.go:55 logs the connection-refused "will retry" warning at 02:50:44.939, 02:50:46.939, and 02:50:49.439 ...]
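The request/response pairs above are client-go's round-tripper debug output for minikube's node-readiness poll; node_ready.go:55 fires whenever the GET fails. A sketch of an equivalent Ready-condition poll with client-go — the kubeconfig path and node name are taken from this log, and the 500ms interval is inferred from the timestamps:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-389759", metav1.GetOptions{})
		if err != nil {
			// While kube-apiserver is down, this is the "connection refused" error above.
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}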
	I1216 02:50:50.954020 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:51.020279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:51.020323 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.020343 1842604 retry.go:31] will retry after 13.418822402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms, 20 cycles, from 02:50:51.440 through 02:51:00.939, all connection refused; node_ready.go:55 "will retry" warnings at 02:50:51.440, 02:50:53.939, 02:50:55.939, 02:50:57.939, and 02:51:00.439 ...]
	I1216 02:51:01.070030 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:01.132180 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:01.132233 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:01.132254 1842604 retry.go:31] will retry after 31.549707812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
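Both addon applies fail before anything reaches the cluster: kubectl first downloads the OpenAPI schema for client-side validation, and localhost:8441, where kube-apiserver should be listening, refuses the connection outright. A way to separate "apiserver down" from "manifest invalid" is to probe the apiserver's /readyz endpoint directly; this helper is an illustrative sketch, not part of minikube:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify only because this is a local health-probe sketch;
	// a real client would trust the cluster CA instead.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8441/readyz?verbose")
	if err != nil {
		// Same failure mode as the log: dial tcp [::1]:8441: connect: connection refused.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}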
	[... six more polling cycles from 02:51:01.439 through 02:51:03.939, all connection refused; node_ready.go:55 warning at 02:51:02.440 ...]
	I1216 02:51:04.439159 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:04.439236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:04.439336 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:04.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:04.517286 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:04.517332 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:04.517352 1842604 retry.go:31] will retry after 44.886251271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms, 56 cycles, from 02:51:04.939 through 02:51:32.439, all connection refused; node_ready.go:55 logs 13 more "will retry" warnings at roughly two-second intervals ...]
	I1216 02:51:32.683088 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:32.746941 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:32.746981 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:32.747001 1842604 retry.go:31] will retry after 33.271174209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
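Note that kubectl's suggested workaround, --validate=false, would not have helped here: validation only fails because the OpenAPI download hits the same refused port, and with validation disabled the apply itself would still fail with the identical connection-refused error. Retrying until kube-apiserver is listening on 8441 again, as minikube does, is the only path that can succeed.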
	[... polling continues every ~500ms, 13 cycles, from 02:51:32.939 through 02:51:38.939, all connection refused; node_ready.go:55 warnings at 02:51:33.939 and 02:51:36.439 ...]
	W1216 02:51:38.939564 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:39.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:39.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:39.439617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:39.939340 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:39.939424 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:39.939761 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:40.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:40.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:40.439520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:40.939185 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:40.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:40.939607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:40.939662 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:41.439347 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:41.439438 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:41.439746 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:41.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:41.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:41.939550 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:42.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:42.439250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:42.439546 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:42.939307 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:42.939384 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:42.939729 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:42.939786 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:43.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:43.439208 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:43.439471 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:43.939165 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:43.939249 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:43.939619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:44.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:44.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:44.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:44.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:44.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:44.939486 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:45.439208 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:45.439295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:45.439645 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:45.439707 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:45.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:45.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:45.939620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:46.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:46.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:46.439495 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:46.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:46.939236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:46.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:47.439327 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:47.439404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:47.439724 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:47.439777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:47.939150 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:47.939227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:47.939484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:48.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:48.439273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:48.439629 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:48.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:48.939303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:48.939613 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:49.404519 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:49.440015 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.440083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.440326 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:49.440365 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:49.475362 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475396 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475480 1842604 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
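
The storageclass failure above has the same root cause as everything else in this window: kubectl cannot download the OpenAPI schema because the apiserver is not answering on localhost:8441, so validation (not the manifest) is what fails. A gate one could wait on before applying is the apiserver's standard /readyz endpoint; the probe below is a sketch under that assumption, with skipped TLS verification acceptable only because it targets the local endpoint from the log.

    // Sketch of a pre-flight probe: the applies above can only succeed once
    // /readyz on localhost:8441 answers 200. Retry window is an assumption
    // loosely matching the log's pacing.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local probe only
            },
        }
        for i := 0; i < 120; i++ { // ~60s at 500ms intervals
            resp, err := client.Get("https://localhost:8441/readyz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver ready; safe to kubectl apply")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became ready")
    }

Note that --validate=false, which the error message suggests, would not help here: the apply itself would still be refused at the same closed port.
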
	I1216 02:51:49.939258 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.939331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:50.439273 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:50.439350 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:50.439747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:50.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:50.939239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:50.939510 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:51.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:51.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:51.439630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:51.939329 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:51.939404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:51.939742 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:51.939799 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:52.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:52.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:52.439527 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:52.939172 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:52.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:52.939573 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:53.439284 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:53.439357 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:53.439683 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:53.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:53.939222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:53.939479 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:54.439208 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:54.439342 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:54.439675 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:54.439728 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:54.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:54.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:54.939602 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:55.439136 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:55.439209 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:55.439455 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:55.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:55.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:55.939619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:56.439186 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:56.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:56.439564 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:56.939156 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:56.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:56.939488 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:56.939531 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:57.439216 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:57.439297 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:57.439615 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:57.939193 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:57.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:57.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:58.439148 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:58.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:58.439522 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:58.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:58.939289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:58.939635 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:58.939699 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:59.439184 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:59.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:59.439571 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:59.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:59.939214 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:59.939462 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:00.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:00.439330 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:00.439711 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:00.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:00.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:00.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:01.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:01.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:01.439492 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:01.439533 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:01.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:01.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:01.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:02.439326 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:02.439405 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:02.439723 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:02.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:02.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:02.939547 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:03.439180 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:03.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:03.439601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:03.439656 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:03.939333 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:03.939416 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:03.939982 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:04.439870 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:04.439950 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:04.441104 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 02:52:04.939902 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:04.939976 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:04.940342 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:05.439974 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:05.440051 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:05.440366 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:05.440422 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:05.939070 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:05.939144 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:05.939427 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:06.018765 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:52:06.088979 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089023 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089108 1842604 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
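
At this point the addon enabler's retry budget is exhausted and it surfaces the error verbatim. The decision it keeps making above ("apply failed, will retry" versus giving up) hinges on whether a failure is transient; one way to sketch that classification is a simple stderr check. The substrings below are drawn from this log, not from minikube's actual addons.go logic, so treat this as an assumption.

    // Sketch (assumption): classify a failed `kubectl apply` as retryable
    // (apiserver unreachable) vs. fatal (genuine manifest problem) from stderr.
    package main

    import (
        "fmt"
        "strings"
    )

    func retryable(stderr string) bool {
        return strings.Contains(stderr, "connection refused") ||
            strings.Contains(stderr, "failed to download openapi")
    }

    func main() {
        stderr := `error: error validating "storage-provisioner.yaml": ` +
            `dial tcp [::1]:8441: connect: connection refused`
        fmt.Println("retryable:", retryable(stderr)) // true: apiserver not up yet
    }

By this classification every failure in the section is retryable, which matches the log: the manifests are never actually rejected, the control plane simply never comes up, and the run ends with no addons enabled.
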
	I1216 02:52:06.092116 1842604 out.go:179] * Enabled addons: 
	I1216 02:52:06.094111 1842604 addons.go:530] duration metric: took 1m58.134103468s for enable addons: enabled=[]
	I1216 02:52:06.439418 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:06.439511 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:06.439875 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:06.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:06.939273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:06.939605 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:07.439150 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:07.439229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:07.439870 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:07.939188 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:07.939273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:07.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:07.939631 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:08.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:08.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:08.439668 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:08.939150 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:08.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:08.939480 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:09.439204 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:09.439279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:09.439643 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:09.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:09.939316 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:09.939656 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:09.939712 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:10.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:10.439211 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:10.439474 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:10.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:10.939278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:10.939588 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:11.439325 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:11.439401 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:11.439727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:11.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:11.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:11.939481 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:12.439170 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:12.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:12.439560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:12.439614 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:12.939193 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:12.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:12.939599 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:13.439145 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:13.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:13.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:13.939196 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:13.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:13.939625 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:14.439204 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:14.439281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:14.439618 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:14.439675 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:14.939160 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:14.939234 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:14.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:15.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:15.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:15.439596 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:15.939166 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:15.939239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:15.939565 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:16.439234 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:16.439331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:16.439692 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:16.439751 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:16.939165 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:16.939244 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:16.939581 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:17.439294 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:17.439375 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:17.439731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:17.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:17.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:17.939474 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:18.439316 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:18.439414 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:18.439810 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:18.439875 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:18.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:18.939218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:18.939576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:19.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:19.439217 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:19.439474 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:19.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:19.939265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:19.939594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:20.439181 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:20.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:20.439630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:20.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:20.939218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:20.939522 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:20.939572 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:21.439266 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:21.439341 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:21.439631 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:21.939183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:21.939258 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:21.939580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:22.439150 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:22.439223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:22.439532 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:22.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:22.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:22.939572 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:22.939619 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:23.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:23.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:23.439592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:23.939140 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:23.939214 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:23.939520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:24.439217 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:24.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:24.439648 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:24.939341 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:24.939424 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:24.939745 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:24.939803 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:25.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:25.439223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:25.439471 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:25.939162 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:25.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:25.939568 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:26.439275 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:26.439358 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:26.439699 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:26.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:26.939224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:26.939479 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:27.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:27.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:27.439562 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:27.439608 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:27.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:27.939278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:27.939607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:28.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:28.439267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:28.439542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:28.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:28.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:28.939576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:29.439248 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:29.439328 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:29.439689 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:29.439742 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:29.939139 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:29.939212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:29.939549 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:30.439178 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:30.439260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:30.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:30.939321 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:30.939397 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:30.939726 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:31.439148 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:31.439225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:31.439487 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:31.939215 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:31.939292 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:31.939632 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:31.939687 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:32.439388 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:32.439463 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:32.439773 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:32.939162 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:32.939236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:32.939546 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:33.439235 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:33.439306 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:33.439592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:33.939270 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:33.939356 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:33.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:33.939787 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:34.439540 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:34.439618 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:34.439876 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:34.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:34.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:34.939606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:35.439196 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:35.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:35.439613 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:35.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:35.939239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:35.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:36.439229 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:36.439305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:36.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:36.439630 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:36.939319 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:36.939408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:36.939776 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:37.439148 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:37.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:37.439547 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:37.939208 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:37.939285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:37.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:38.439553 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:38.439631 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:38.439986 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:38.440048 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:38.939161 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:38.939231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:38.939530 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:39.439185 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:39.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:39.439613 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:39.939328 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:39.939405 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:39.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:40.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:40.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:40.439560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:40.939164 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:40.939242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:40.939581 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:40.939636 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:41.439181 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:41.439262 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:41.439601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:41.939151 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:41.939224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:41.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:42.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:42.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:42.439574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:42.939218 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:42.939302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:42.939650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:42.939708 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:43.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:43.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:43.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:43.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:43.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:43.939585 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:44.439396 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:44.439479 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:44.439801 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:44.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:44.939227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:44.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:45.439245 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:45.439326 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:45.439625 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:45.439674 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:45.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:45.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:45.939606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:46.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:46.439229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:46.439538 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:46.939135 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:46.939212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:46.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:47.439164 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:47.439253 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:47.439603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:47.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:47.939218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:47.939573 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:47.939618 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:48.439193 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:48.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:48.439619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:48.939338 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:48.939416 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:48.939771 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:49.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:49.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:49.439504 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:49.939205 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:49.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:49.939621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:49.939679 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:50.439358 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:50.439438 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:50.439821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:50.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:50.939217 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:50.939512 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:51.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:51.439257 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:51.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:51.939305 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:51.939392 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:51.939737 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:51.939789 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:52.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:52.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:52.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:52.939183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:52.939262 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:52.939616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:53.439235 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:53.439314 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:53.439636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:53.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:53.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:53.939491 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:54.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:54.439271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:54.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:54.439659 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:54.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:54.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:54.939623 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:55.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:55.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:55.439546 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:55.939182 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:55.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:55.939593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:56.439309 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:56.439391 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:56.439719 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:56.439777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:56.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:56.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:56.939545 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:57.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:57.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:57.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:57.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:57.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:57.939589 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:58.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:58.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:58.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:58.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:58.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:58.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:58.939617 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:59.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:59.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:59.439612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:59.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:59.939212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:59.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:00.439229 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:00.439331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:00.439692 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:00.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:00.939273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:00.939593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:00.939649 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:01.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:01.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.939638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.439351 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.439429 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.439735 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.939471 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:03.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:03.439650 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:03.939321 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.939395 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.439219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.439542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.939207 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.939477 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:05.939529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:06.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.439240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.439557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:06.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.939724 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.439508 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.939335 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.939685 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:07.939739 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:08.439494 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.439576 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.439903 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:08.939686 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.939764 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.940063 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.440032 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.440108 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.440422 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.939177 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.939592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:10.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:10.439589 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:10.939186 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.439598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:12.439168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:12.439639 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:12.939298 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.939374 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.939265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:14.439387 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.439472 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.439812 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:14.439867 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:14.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.939501 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.439631 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.939227 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.939303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.939647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.939242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.939600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:16.939656 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:17.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.439397 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.439744 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:17.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.439237 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.439576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.939765 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:18.939822 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:19.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.439538 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:19.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.939233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.439182 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:21.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.439583 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:21.439640 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 poll repeats every ~500ms through 02:54:20, each response empty and each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 "will retry" warning recurs roughly every two seconds ...]
	I1216 02:54:20.939676 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:20.939743 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:20.940000 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:21.439750 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:21.439826 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:21.440155 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:21.939838 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:21.939912 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:21.940244 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:22.439927 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:22.440003 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:22.440320 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:22.440370 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:22.940106 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:22.940183 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:22.940523 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:23.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:23.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:23.439603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:23.939141 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:23.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:23.939482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:24.439339 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:24.439412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:24.439743 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:24.939456 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:24.939535 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:24.939838 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:24.939885 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:25.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:25.439298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:25.439551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:25.939177 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:25.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:25.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:26.439303 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:26.439383 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:26.439731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:26.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:26.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:26.939504 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:27.439205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:27.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:27.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:27.439700 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:27.939180 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:27.939258 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:27.939582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:28.439179 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:28.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:28.439531 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:28.939224 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:28.939310 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:28.939682 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:29.439497 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:29.439576 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:29.439876 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:29.439924 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:29.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:29.939230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:29.939551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:30.439209 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:30.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:30.439639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:30.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:30.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:30.939636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:31.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:31.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:31.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:31.939283 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:31.939364 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:31.939698 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:31.939756 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:32.439434 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:32.439515 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:32.439885 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:32.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:32.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:32.939520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:33.439177 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:33.439263 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:33.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:33.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:33.939412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:33.939761 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:33.939818 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:34.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:34.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:34.439568 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:34.939156 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:34.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:34.939595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:35.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:35.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:35.439639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:35.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:35.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:35.939616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:36.439210 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:36.439288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:36.439644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:36.439706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:36.939218 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:36.939300 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:36.939664 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:37.439159 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:37.439230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:37.439551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:37.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:37.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:37.939615 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:38.439345 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:38.439428 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:38.439811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:38.439872 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:38.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:38.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:38.939515 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:39.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:39.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:39.439608 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:39.939305 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:39.939380 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:39.939747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:40.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:40.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:40.439481 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:40.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:40.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:40.939568 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:40.939620 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:41.439300 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:41.439383 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:41.439721 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:41.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:41.939218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:41.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:42.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:42.439247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:42.439561 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:42.939283 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:42.939360 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:42.939717 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:42.939776 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:43.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.439482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:43.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.939617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.439181 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.439582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.939158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:45.439421 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.439558 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.440302 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:45.440450 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:45.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.939295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.939642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.439483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.939250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.939578 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.439549 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.940079 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.940162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.940421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:47.940463 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:48.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.439275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.439617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:48.939330 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.939404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.939700 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.439238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.439553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:50.439334 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.439719 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:50.439774 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:50.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.439267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.439563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.439158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.439503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.939205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:52.939669 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:53.439238 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.439324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:53.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.939529 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.439450 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.439530 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.439887 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.939607 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.939685 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:54.940065 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:55.439530 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.439603 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.439907 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:55.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.439329 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.439662 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.939519 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:57.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.439641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:57.439700 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:57.939371 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.939446 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.939781 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.439185 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.439281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.439643 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.939188 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:59.939684 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:00.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.439758 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:00.939212 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.939641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.939279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.939639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:01.939706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:02.439386 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.439463 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.439808 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:02.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.939260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.939584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:04.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.439530 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:04.439579 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:04.939230 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.939672 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.439280 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.439358 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.439751 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.939466 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:06.439161 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:06.439630 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:06.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.939644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.439279 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.439353 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.439621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.939358 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.939442 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.939811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:08.439810 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.439889 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.440239 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:08.440291 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:08.939609 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.939681 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.939929 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.439800 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.439881 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.440208 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.939526 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.939604 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.939943 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.439719 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.439792 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.440067 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.939812 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.939897 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.940243 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:10.940297 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 poll repeats every ~500ms from 02:55:11 through 02:56:08; every attempt returns an empty response (status="" headers="" milliseconds=0), and the node_ready.go:55 "connection refused" warning recurs roughly every 2s throughout ...]
	I1216 02:56:08.439510 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:08.439620 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:08.440275 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:08.939119 1842604 node_ready.go:38] duration metric: took 6m0.000151723s for node "functional-389759" to be "Ready" ...
	I1216 02:56:08.942443 1842604 out.go:203] 
	W1216 02:56:08.945313 1842604 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 02:56:08.945510 1842604 out.go:285] * 
	W1216 02:56:08.947818 1842604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:56:08.950773 1842604 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
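An empty table above means the container runtime reports no containers at all, not even exited ones, which matches the kubelet crash loop shown further below. As a sketch (assuming the node is still reachable over SSH through the profile), the same view can be pulled directly:

	# List every CRI container, running or exited, on the minikube node.
	minikube ssh -p functional-389759 -- sudo crictl ps -a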
	
	==> containerd <==
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582071213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582085769Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582153534Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582172118Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582182621Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582193755Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582203732Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582218985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582235043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582268732Z" level=info msg="Connect containerd service"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582549853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.583245998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.601954104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.602161822Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.602278620Z" level=info msg="Start recovering state"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.602242067Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638073037Z" level=info msg="Start event monitor"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638125656Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638137889Z" level=info msg="Start streaming server"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638147555Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638156260Z" level=info msg="runtime interface starting up..."
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638163350Z" level=info msg="starting plugins..."
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638176322Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:50:06 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.639112317Z" level=info msg="containerd successfully booted in 0.078687s"
	
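The "failed to load cni during init" error above is expected at this point in startup: kindnet, the CNI minikube recommends for the docker driver + containerd runtime (see the cni.go lines later in this log), is only deployed once the apiserver is up, so /etc/cni/net.d is still empty. A quick check, assuming SSH access to the node:

	# An empty directory means no CNI config was ever written.
	minikube ssh -p functional-389759 -- ls /etc/cni/net.d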
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:56:10.837607    8464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:10.838395    8464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:10.839942    8464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:10.840267    8464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:10.841721    8464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
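The repeated "connection refused" on localhost:8441 means nothing is listening on the apiserver port inside the node. A direct way to confirm, assuming iproute2's ss is available in the node image:

	# A healthy control plane shows a LISTEN socket on :8441; here there is none.
	minikube ssh -p functional-389759 -- sudo ss -ltnp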
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:56:10 up  8:38,  0 user,  load average: 0.29, 0.28, 0.81
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 02:56:07 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:08 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 16 02:56:08 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:08 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:08 functional-389759 kubelet[8347]: E1216 02:56:08.239229    8347 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:08 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:08 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:08 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 16 02:56:08 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:08 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:09 functional-389759 kubelet[8352]: E1216 02:56:09.023452    8352 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:09 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:09 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:09 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 16 02:56:09 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:09 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:09 functional-389759 kubelet[8358]: E1216 02:56:09.758953    8358 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:09 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:09 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:10 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 16 02:56:10 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:10 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:10 functional-389759 kubelet[8379]: E1216 02:56:10.497579    8379 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:10 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:10 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
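Every kubelet restart above (counters 808 through 811) fails the same config validation: this kubelet build refuses to start on a cgroup v1 host, and the CI machine runs Ubuntu 20.04, which boots with cgroup v1 by default. A minimal host-side check, nothing minikube-specific:

	# Prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup/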

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (327.94054ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.01s)
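
The six-minute wait that failed here is just a poll of the node's Ready condition (the round_trippers loop at the top of the logs). The equivalent manual probe, as a sketch assuming the functional-389759 context were reachable at all:

	# Prints "True" once the node is Ready; every attempt here was refused
	# at the TCP level because the apiserver never came up.
	kubectl --context functional-389759 get node functional-389759 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'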

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-389759 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-389759 get po -A: exit status 1 (70.271171ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-389759 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-389759 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-389759 get po -A"
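
All three assertion failures above reduce to the same root cause: the apiserver endpoint is down. A direct probe, as a sketch (192.168.49.2 sits on the docker bridge network, so this only works from the host running the tests):

	# A healthy apiserver answers "ok"; here the TCP connect itself is refused.
	curl -sk https://192.168.49.2:8441/healthz
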
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
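
The port map above shows 8441/tcp published on 127.0.0.1:34357. That value can be read back with the same Go template syntax minikube itself uses for the 22/tcp lookups later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389759
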
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (301.73862ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-853651 addons list                                                                                                                           │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ addons         │ functional-853651 addons list -o json                                                                                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node-connect --url                                                                                                      │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-853651 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                                         │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-853651 --alsologtostderr -v=1                                                                                          │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service list                                                                                                                          │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service list -o json                                                                                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service --namespace=default --https --url hello-node                                                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node --url --format={{.IP}}                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ service        │ functional-853651 service hello-node --url                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format short --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format yaml --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ ssh            │ functional-853651 ssh pgrep buildkitd                                                                                                                   │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ image          │ functional-853651 image ls --format json --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format table --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls                                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ delete         │ -p functional-853651                                                                                                                                    │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-389759 --alsologtostderr -v=8                                                                                                             │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:50 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:50:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:50:03.940449 1842604 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:50:03.940640 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.940666 1842604 out.go:374] Setting ErrFile to fd 2...
	I1216 02:50:03.940685 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.941001 1842604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:50:03.941424 1842604 out.go:368] Setting JSON to false
	I1216 02:50:03.942302 1842604 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30748,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:50:03.942395 1842604 start.go:143] virtualization:  
	I1216 02:50:03.948050 1842604 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:50:03.951289 1842604 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:50:03.951381 1842604 notify.go:221] Checking for updates...
	I1216 02:50:03.954734 1842604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:50:03.957600 1842604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:03.960611 1842604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:50:03.963508 1842604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:50:03.966329 1842604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:50:03.969672 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:03.969806 1842604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:50:04.007031 1842604 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:50:04.007241 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.073702 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.062313817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.073812 1842604 docker.go:319] overlay module found
	I1216 02:50:04.077006 1842604 out.go:179] * Using the docker driver based on existing profile
	I1216 02:50:04.079902 1842604 start.go:309] selected driver: docker
	I1216 02:50:04.079932 1842604 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.080054 1842604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:50:04.080179 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.136011 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.126842192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.136427 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:04.136482 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:04.136533 1842604 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.139723 1842604 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:50:04.142545 1842604 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:50:04.145565 1842604 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:50:04.148399 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:04.148453 1842604 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:50:04.148469 1842604 cache.go:65] Caching tarball of preloaded images
	I1216 02:50:04.148474 1842604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:50:04.148567 1842604 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:50:04.148577 1842604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:50:04.148682 1842604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:50:04.168498 1842604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:50:04.168522 1842604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:50:04.168544 1842604 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:50:04.168575 1842604 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:50:04.168643 1842604 start.go:364] duration metric: took 46.539µs to acquireMachinesLock for "functional-389759"
	I1216 02:50:04.168667 1842604 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:50:04.168673 1842604 fix.go:54] fixHost starting: 
	I1216 02:50:04.168962 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:04.192862 1842604 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:50:04.192891 1842604 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:50:04.196202 1842604 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:50:04.196246 1842604 machine.go:94] provisionDockerMachine start ...
	I1216 02:50:04.196329 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.213973 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.214316 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.214325 1842604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:50:04.350600 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.350628 1842604 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:50:04.350691 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.368974 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.369299 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.369316 1842604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:50:04.513062 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.513215 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.531552 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.531870 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.531893 1842604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:50:04.663498 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:50:04.663573 1842604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:50:04.663612 1842604 ubuntu.go:190] setting up certificates
	I1216 02:50:04.663658 1842604 provision.go:84] configureAuth start
	I1216 02:50:04.663756 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:04.681830 1842604 provision.go:143] copyHostCerts
	I1216 02:50:04.681871 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681914 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:50:04.681921 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681996 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:50:04.682080 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682098 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:50:04.682107 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682134 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:50:04.682171 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682188 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:50:04.682192 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682218 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:50:04.682263 1842604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:50:04.918732 1842604 provision.go:177] copyRemoteCerts
	I1216 02:50:04.918803 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:50:04.918909 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.945401 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.043237 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 02:50:05.043301 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:50:05.061641 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 02:50:05.061702 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:50:05.079841 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 02:50:05.079956 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:50:05.097722 1842604 provision.go:87] duration metric: took 434.019439ms to configureAuth
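
The provisioning step above (provision.go:117) issues a server certificate signed by the shared minikube CA, with the SAN set shown in the log (127.0.0.1, 192.168.49.2, functional-389759, localhost, minikube), and copyRemoteCerts then ships it to /etc/docker on the node. A minimal Go sketch of that kind of CA-signed issuance follows; it assumes a PKCS#1 RSA CA key and illustrative file paths, and is not minikube's actual implementation:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // mustDecode reads one PEM block from disk; it panics on malformed
    // input, which is fine for a sketch.
    func mustDecode(path string) []byte {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in " + path)
    	}
    	return block.Bytes
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustDecode("certs/ca.pem"))
    	if err != nil {
    		panic(err)
    	}
    	// Assumption: the CA key is a PKCS#1 RSA key ("RSA PRIVATE KEY" PEM).
    	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("certs/ca-key.pem"))
    	if err != nil {
    		panic(err)
    	}
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-389759"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN set taken from the provision.go:117 line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:    []string{"functional-389759", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	out, _ := os.Create("server.pem")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
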
	I1216 02:50:05.097754 1842604 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:50:05.097953 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:05.097967 1842604 machine.go:97] duration metric: took 901.714132ms to provisionDockerMachine
	I1216 02:50:05.097975 1842604 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:50:05.097987 1842604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:50:05.098051 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:50:05.098102 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.115383 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.211319 1842604 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:50:05.214768 1842604 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 02:50:05.214793 1842604 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 02:50:05.214797 1842604 command_runner.go:130] > VERSION_ID="12"
	I1216 02:50:05.214802 1842604 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 02:50:05.214807 1842604 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 02:50:05.214810 1842604 command_runner.go:130] > ID=debian
	I1216 02:50:05.214815 1842604 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 02:50:05.214820 1842604 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 02:50:05.214826 1842604 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 02:50:05.214871 1842604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:50:05.214894 1842604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:50:05.214911 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:50:05.214973 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:50:05.215088 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:50:05.215101 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /etc/ssl/certs/17983702.pem
	I1216 02:50:05.215203 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:50:05.215211 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> /etc/test/nested/copy/1798370/hosts
	I1216 02:50:05.215287 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:50:05.223273 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:05.241790 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:50:05.259718 1842604 start.go:296] duration metric: took 161.727689ms for postStartSetup
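
The filesync scan above mirrors everything under .minikube/files onto the node at the same relative path, which is how files/etc/ssl/certs/17983702.pem becomes /etc/ssl/certs/17983702.pem and the nested hosts file lands under /etc/test/nested/copy/1798370. A rough Go equivalent of that scan, assuming the local layout shown in the log (paths illustrative):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func main() {
    	root := ".minikube/files"
    	// Every regular file under root maps to "/" + its path relative to root.
    	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, _ := filepath.Rel(root, p)
    		fmt.Printf("local asset: %s -> /%s\n", p, rel)
    		return nil
    	})
    }
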
	I1216 02:50:05.259801 1842604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:50:05.259846 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.277760 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.371870 1842604 command_runner.go:130] > 18%
	I1216 02:50:05.372496 1842604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:50:05.377201 1842604 command_runner.go:130] > 161G
	I1216 02:50:05.377708 1842604 fix.go:56] duration metric: took 1.209030723s for fixHost
	I1216 02:50:05.377728 1842604 start.go:83] releasing machines lock for "functional-389759", held for 1.209073027s
	I1216 02:50:05.377811 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:05.395427 1842604 ssh_runner.go:195] Run: cat /version.json
	I1216 02:50:05.395497 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.395795 1842604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:50:05.395856 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.414621 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.417076 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.510754 1842604 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 02:50:05.510902 1842604 ssh_runner.go:195] Run: systemctl --version
	I1216 02:50:05.609923 1842604 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 02:50:05.612841 1842604 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 02:50:05.612896 1842604 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 02:50:05.613034 1842604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 02:50:05.617736 1842604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 02:50:05.617774 1842604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:50:05.617838 1842604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:50:05.626000 1842604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
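
The find/-exec command above renames any bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube installs later; on this node nothing matched. A rough Go equivalent of that rename pass, with the matching simplified to filename substrings (a sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("nothing to disable:", err)
    		return
    	}
    	for _, e := range entries {
    		name := e.Name()
    		// mimic: -type f ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			fmt.Printf("%s/%s, ", dir, name)
    			os.Rename(dir+"/"+name, dir+"/"+name+".mk_disabled")
    		}
    	}
    }
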
	I1216 02:50:05.626028 1842604 start.go:496] detecting cgroup driver to use...
	I1216 02:50:05.626059 1842604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:50:05.626109 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:50:05.644077 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:50:05.659636 1842604 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:50:05.659709 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:50:05.676805 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:50:05.692573 1842604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:50:05.816755 1842604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:50:05.944883 1842604 docker.go:234] disabling docker service ...
	I1216 02:50:05.944952 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:50:05.960111 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:50:05.973273 1842604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:50:06.102700 1842604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:50:06.226099 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
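
The detection at start.go:496/detect.go:187 above picked the "cgroupfs" driver from the host, and the containerd deprecation warning later in this log confirms the runner is still on cgroup v1. One common way to tell v1 from the unified v2 hierarchy (a sketch of the idea, not minikube's detection logic):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// On a cgroup v2 (unified) host this file exists; on a v1 host, as
    	// on this runner, the check falls through.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		fmt.Println("cgroup v1 - the \"cgroupfs\" driver is the safe default here")
    	}
    }
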
	I1216 02:50:06.239914 1842604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:50:06.254235 1842604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 02:50:06.255720 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:50:06.265881 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:50:06.274988 1842604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:50:06.275099 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:50:06.284319 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.293767 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:50:06.302914 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.312051 1842604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:50:06.320364 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:50:06.329464 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:50:06.338574 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
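
The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, force SystemdCgroup = false to match the cgroupfs driver, normalize the runc runtime type to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true. A Go sketch of just the SystemdCgroup rewrite, equivalent to the corresponding sed (illustrative only):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }
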
	I1216 02:50:06.347623 1842604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:50:06.354520 1842604 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 02:50:06.355609 1842604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:50:06.363468 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:06.501216 1842604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 02:50:06.641570 1842604 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:50:06.641646 1842604 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:50:06.645599 1842604 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1216 02:50:06.645623 1842604 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 02:50:06.645629 1842604 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1216 02:50:06.645636 1842604 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:06.645642 1842604 command_runner.go:130] > Access: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645647 1842604 command_runner.go:130] > Modify: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645652 1842604 command_runner.go:130] > Change: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645656 1842604 command_runner.go:130] >  Birth: -
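
start.go:543 above commits to waiting up to 60s for /run/containerd/containerd.sock, and the stat output shows the socket appeared almost immediately after the restart. A sketch of such a poll-with-deadline in Go (the interval and helper name are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists as a unix socket or the
    // deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is ready")
    }
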
	I1216 02:50:06.645685 1842604 start.go:564] Will wait 60s for crictl version
	I1216 02:50:06.645740 1842604 ssh_runner.go:195] Run: which crictl
	I1216 02:50:06.649139 1842604 command_runner.go:130] > /usr/local/bin/crictl
	I1216 02:50:06.649430 1842604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:50:06.676623 1842604 command_runner.go:130] > Version:  0.1.0
	I1216 02:50:06.676645 1842604 command_runner.go:130] > RuntimeName:  containerd
	I1216 02:50:06.676661 1842604 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1216 02:50:06.676671 1842604 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 02:50:06.676683 1842604 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:50:06.676740 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.701508 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.703452 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.721412 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.729453 1842604 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:50:06.732626 1842604 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:50:06.754519 1842604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:50:06.758684 1842604 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 02:50:06.758798 1842604 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:50:06.758921 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:06.758993 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.783030 1842604 command_runner.go:130] > {
	I1216 02:50:06.783088 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.783093 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783103 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.783109 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783114 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.783117 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783121 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783130 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.783133 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783138 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.783142 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783146 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783149 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783152 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783160 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.783163 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783169 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.783172 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783176 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783185 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.783188 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783192 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.783196 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783200 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783204 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783207 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783214 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.783224 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783229 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.783232 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783243 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783251 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.783254 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783258 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.783262 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.783266 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783269 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783272 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783278 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.783282 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783287 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.783290 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783294 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783305 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.783308 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783312 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.783317 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783321 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783324 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783328 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783332 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783337 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783340 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783347 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.783351 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.783359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783363 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783370 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.783374 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783381 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.783384 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783392 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783395 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783399 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783403 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783406 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783409 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783415 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.783419 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783424 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.783427 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783431 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783439 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.783442 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783446 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.783450 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783454 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783457 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783461 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783464 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783467 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783470 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783476 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.783480 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783485 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.783488 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783492 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783499 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.783503 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783506 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.783510 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783514 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783520 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783524 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783530 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.783534 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783540 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.783543 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783546 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783554 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.783557 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.783568 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783572 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783575 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783579 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783582 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783585 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783588 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783595 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.783599 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783604 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.783608 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783611 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783619 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.783622 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783625 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.783629 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783633 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.783636 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783639 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783643 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.783646 1842604 command_runner.go:130] >     }
	I1216 02:50:06.783648 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.783651 1842604 command_runner.go:130] > }
	I1216 02:50:06.785559 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.785577 1842604 containerd.go:534] Images already preloaded, skipping extraction
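
containerd.go:627 decides the preload is complete by listing images over CRI and checking that the expected tags are present. A sketch of that check against the JSON format printed above (the expected-image list here is a subset, for illustration):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // images mirrors the relevant slice of `crictl images --output json`.
    type images struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var got images
    	if err := json.Unmarshal(out, &got); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range got.Images {
    		for _, t := range img.RepoTags {
    			have[t] = true
    		}
    	}
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/etcd:3.6.5-0",
    		"registry.k8s.io/pause:3.10.1",
    	} {
    		if !have[want] {
    			fmt.Println("missing:", want)
    			return
    		}
    	}
    	fmt.Println("all images are preloaded")
    }
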
	I1216 02:50:06.785637 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.809062 1842604 command_runner.go:130] > {
	I1216 02:50:06.809080 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.809085 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809094 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.809099 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809105 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.809108 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809112 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809121 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.809125 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809129 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.809133 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809137 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809140 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809143 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809153 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.809157 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809162 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.809166 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809170 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809178 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.809181 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809186 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.809189 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809193 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809196 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809199 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809207 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.809211 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809216 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.809219 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809226 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809235 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.809241 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809246 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.809250 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.809254 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809257 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809260 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809267 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.809270 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809276 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.809279 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809283 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809291 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.809294 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809298 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.809303 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809307 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809311 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809315 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809318 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809322 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809325 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809332 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.809335 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809341 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.809344 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809348 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.809359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809364 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.809367 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809379 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809382 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809386 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809393 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809396 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809399 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809406 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.809410 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809416 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.809419 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809423 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809432 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.809435 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809439 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.809443 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809447 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809450 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809453 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809461 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809464 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809467 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809475 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.809478 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809483 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.809486 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809490 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809498 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.809501 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809505 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.809509 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809513 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809516 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809519 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809526 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.809530 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809535 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.809541 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809545 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809553 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.809556 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.809564 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809568 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809571 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809575 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809579 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809582 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809585 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809591 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.809595 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809599 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.809602 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809606 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809614 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.809616 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809620 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.809624 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809627 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.809632 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809635 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809639 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.809642 1842604 command_runner.go:130] >     }
	I1216 02:50:06.809645 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.809648 1842604 command_runner.go:130] > }
	I1216 02:50:06.811271 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.811300 1842604 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:50:06.811308 1842604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:50:06.811452 1842604 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:50:06.811544 1842604 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:50:06.842167 1842604 command_runner.go:130] > {
	I1216 02:50:06.842188 1842604 command_runner.go:130] >   "cniconfig": {
	I1216 02:50:06.842194 1842604 command_runner.go:130] >     "Networks": [
	I1216 02:50:06.842198 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842203 1842604 command_runner.go:130] >         "Config": {
	I1216 02:50:06.842208 1842604 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1216 02:50:06.842213 1842604 command_runner.go:130] >           "Name": "cni-loopback",
	I1216 02:50:06.842217 1842604 command_runner.go:130] >           "Plugins": [
	I1216 02:50:06.842220 1842604 command_runner.go:130] >             {
	I1216 02:50:06.842224 1842604 command_runner.go:130] >               "Network": {
	I1216 02:50:06.842229 1842604 command_runner.go:130] >                 "ipam": {},
	I1216 02:50:06.842234 1842604 command_runner.go:130] >                 "type": "loopback"
	I1216 02:50:06.842238 1842604 command_runner.go:130] >               },
	I1216 02:50:06.842243 1842604 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1216 02:50:06.842246 1842604 command_runner.go:130] >             }
	I1216 02:50:06.842249 1842604 command_runner.go:130] >           ],
	I1216 02:50:06.842259 1842604 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1216 02:50:06.842263 1842604 command_runner.go:130] >         },
	I1216 02:50:06.842268 1842604 command_runner.go:130] >         "IFName": "lo"
	I1216 02:50:06.842271 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842275 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842279 1842604 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1216 02:50:06.842283 1842604 command_runner.go:130] >     "PluginDirs": [
	I1216 02:50:06.842287 1842604 command_runner.go:130] >       "/opt/cni/bin"
	I1216 02:50:06.842291 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842298 1842604 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1216 02:50:06.842301 1842604 command_runner.go:130] >     "Prefix": "eth"
	I1216 02:50:06.842304 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842308 1842604 command_runner.go:130] >   "config": {
	I1216 02:50:06.842312 1842604 command_runner.go:130] >     "cdiSpecDirs": [
	I1216 02:50:06.842315 1842604 command_runner.go:130] >       "/etc/cdi",
	I1216 02:50:06.842320 1842604 command_runner.go:130] >       "/var/run/cdi"
	I1216 02:50:06.842328 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842331 1842604 command_runner.go:130] >     "cni": {
	I1216 02:50:06.842335 1842604 command_runner.go:130] >       "binDir": "",
	I1216 02:50:06.842338 1842604 command_runner.go:130] >       "binDirs": [
	I1216 02:50:06.842342 1842604 command_runner.go:130] >         "/opt/cni/bin"
	I1216 02:50:06.842345 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.842349 1842604 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1216 02:50:06.842352 1842604 command_runner.go:130] >       "confTemplate": "",
	I1216 02:50:06.842356 1842604 command_runner.go:130] >       "ipPref": "",
	I1216 02:50:06.842359 1842604 command_runner.go:130] >       "maxConfNum": 1,
	I1216 02:50:06.842364 1842604 command_runner.go:130] >       "setupSerially": false,
	I1216 02:50:06.842368 1842604 command_runner.go:130] >       "useInternalLoopback": false
	I1216 02:50:06.842371 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842378 1842604 command_runner.go:130] >     "containerd": {
	I1216 02:50:06.842382 1842604 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1216 02:50:06.842387 1842604 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1216 02:50:06.842392 1842604 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1216 02:50:06.842396 1842604 command_runner.go:130] >       "runtimes": {
	I1216 02:50:06.842399 1842604 command_runner.go:130] >         "runc": {
	I1216 02:50:06.842404 1842604 command_runner.go:130] >           "ContainerAnnotations": null,
	I1216 02:50:06.842415 1842604 command_runner.go:130] >           "PodAnnotations": null,
	I1216 02:50:06.842421 1842604 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1216 02:50:06.842425 1842604 command_runner.go:130] >           "cgroupWritable": false,
	I1216 02:50:06.842429 1842604 command_runner.go:130] >           "cniConfDir": "",
	I1216 02:50:06.842433 1842604 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1216 02:50:06.842436 1842604 command_runner.go:130] >           "io_type": "",
	I1216 02:50:06.842439 1842604 command_runner.go:130] >           "options": {
	I1216 02:50:06.842443 1842604 command_runner.go:130] >             "BinaryName": "",
	I1216 02:50:06.842448 1842604 command_runner.go:130] >             "CriuImagePath": "",
	I1216 02:50:06.842451 1842604 command_runner.go:130] >             "CriuWorkPath": "",
	I1216 02:50:06.842455 1842604 command_runner.go:130] >             "IoGid": 0,
	I1216 02:50:06.842458 1842604 command_runner.go:130] >             "IoUid": 0,
	I1216 02:50:06.842462 1842604 command_runner.go:130] >             "NoNewKeyring": false,
	I1216 02:50:06.842469 1842604 command_runner.go:130] >             "Root": "",
	I1216 02:50:06.842473 1842604 command_runner.go:130] >             "ShimCgroup": "",
	I1216 02:50:06.842480 1842604 command_runner.go:130] >             "SystemdCgroup": false
	I1216 02:50:06.842483 1842604 command_runner.go:130] >           },
	I1216 02:50:06.842488 1842604 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1216 02:50:06.842494 1842604 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1216 02:50:06.842499 1842604 command_runner.go:130] >           "runtimePath": "",
	I1216 02:50:06.842504 1842604 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1216 02:50:06.842508 1842604 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1216 02:50:06.842512 1842604 command_runner.go:130] >           "snapshotter": ""
	I1216 02:50:06.842515 1842604 command_runner.go:130] >         }
	I1216 02:50:06.842518 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842521 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842530 1842604 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1216 02:50:06.842535 1842604 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1216 02:50:06.842541 1842604 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1216 02:50:06.842546 1842604 command_runner.go:130] >     "disableApparmor": false,
	I1216 02:50:06.842550 1842604 command_runner.go:130] >     "disableHugetlbController": true,
	I1216 02:50:06.842554 1842604 command_runner.go:130] >     "disableProcMount": false,
	I1216 02:50:06.842558 1842604 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1216 02:50:06.842562 1842604 command_runner.go:130] >     "enableCDI": true,
	I1216 02:50:06.842565 1842604 command_runner.go:130] >     "enableSelinux": false,
	I1216 02:50:06.842569 1842604 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1216 02:50:06.842573 1842604 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1216 02:50:06.842578 1842604 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1216 02:50:06.842582 1842604 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1216 02:50:06.842586 1842604 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1216 02:50:06.842590 1842604 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1216 02:50:06.842595 1842604 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1216 02:50:06.842600 1842604 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842604 1842604 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1216 02:50:06.842610 1842604 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842614 1842604 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1216 02:50:06.842622 1842604 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1216 02:50:06.842625 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842628 1842604 command_runner.go:130] >   "features": {
	I1216 02:50:06.842632 1842604 command_runner.go:130] >     "supplemental_groups_policy": true
	I1216 02:50:06.842635 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842639 1842604 command_runner.go:130] >   "golang": "go1.24.9",
	I1216 02:50:06.842649 1842604 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842658 1842604 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842662 1842604 command_runner.go:130] >   "runtimeHandlers": [
	I1216 02:50:06.842665 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842668 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842672 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842676 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842679 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842682 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842685 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842688 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842693 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842697 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842700 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842703 1842604 command_runner.go:130] >       "name": "runc"
	I1216 02:50:06.842706 1842604 command_runner.go:130] >     }
	I1216 02:50:06.842709 1842604 command_runner.go:130] >   ],
	I1216 02:50:06.842713 1842604 command_runner.go:130] >   "status": {
	I1216 02:50:06.842716 1842604 command_runner.go:130] >     "conditions": [
	I1216 02:50:06.842719 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842723 1842604 command_runner.go:130] >         "message": "",
	I1216 02:50:06.842730 1842604 command_runner.go:130] >         "reason": "",
	I1216 02:50:06.842734 1842604 command_runner.go:130] >         "status": true,
	I1216 02:50:06.842739 1842604 command_runner.go:130] >         "type": "RuntimeReady"
	I1216 02:50:06.842742 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842745 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842756 1842604 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1216 02:50:06.842764 1842604 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1216 02:50:06.842775 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842779 1842604 command_runner.go:130] >         "type": "NetworkReady"
	I1216 02:50:06.842782 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842785 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842816 1842604 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1216 02:50:06.842831 1842604 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1216 02:50:06.842837 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842848 1842604 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1216 02:50:06.842852 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842855 1842604 command_runner.go:130] >     ]
	I1216 02:50:06.842857 1842604 command_runner.go:130] >   }
	I1216 02:50:06.842860 1842604 command_runner.go:130] > }
	I1216 02:50:06.845895 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:06.845921 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:06.845936 1842604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
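
The crictl info dump above reports NetworkReady=false with reason NetworkPluginNotReady, which is why cni.go:143 recommends kindnet for the docker driver + containerd combination: kindnet will drop a config into /etc/cni/net.d and clear the condition. A sketch that surfaces failing conditions from that JSON (field names as shown above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "info").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info struct {
    		Status struct {
    			Conditions []struct {
    				Type    string `json:"type"`
    				Status  bool   `json:"status"`
    				Reason  string `json:"reason"`
    				Message string `json:"message"`
    			} `json:"conditions"`
    		} `json:"status"`
    	}
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	// Print only the conditions that are not satisfied.
    	for _, c := range info.Status.Conditions {
    		if !c.Status {
    			fmt.Printf("%s=false (%s): %s\n", c.Type, c.Reason, c.Message)
    		}
    	}
    }
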
	I1216 02:50:06.845966 1842604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:50:06.846165 1842604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
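The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:190. A minimal sketch of that rendering step in Go, assuming illustrative field names and template text rather than minikube's real bootstrapper types:

	// Hypothetical sketch, not minikube's generator: render a subset of the
	// kubeadm options above into an InitConfiguration via text/template.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmOpts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		CRISocket        string
	}

	// Template text is built line by line so the rendered YAML keeps its
	// two-space indentation.
	const initCfg = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.APIServerPort}}\n" +
		"nodeRegistration:\n" +
		"  criSocket: {{.CRISocket}}\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.49.2",
			APIServerPort:    8441,
			NodeName:         "functional-389759",
			CRISocket:        "unix:///run/containerd/containerd.sock",
		}
		tmpl := template.Must(template.New("init").Parse(initCfg))
		_ = tmpl.Execute(os.Stdout, opts) // prints the first document of the config above
	}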
	I1216 02:50:06.846270 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:50:06.854737 1842604 command_runner.go:130] > kubeadm
	I1216 02:50:06.854757 1842604 command_runner.go:130] > kubectl
	I1216 02:50:06.854762 1842604 command_runner.go:130] > kubelet
	I1216 02:50:06.854790 1842604 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:50:06.854884 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:50:06.863474 1842604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:50:06.877235 1842604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:50:06.893176 1842604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 02:50:06.907542 1842604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:50:06.911554 1842604 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 02:50:06.912008 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.032285 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:07.187841 1842604 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:50:07.187907 1842604 certs.go:195] generating shared ca certs ...
	I1216 02:50:07.187938 1842604 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.188113 1842604 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:50:07.188262 1842604 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:50:07.188293 1842604 certs.go:257] generating profile certs ...
	I1216 02:50:07.188479 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:50:07.188626 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:50:07.188704 1842604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:50:07.188746 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 02:50:07.188833 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 02:50:07.188865 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 02:50:07.188913 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 02:50:07.188955 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 02:50:07.188991 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 02:50:07.189039 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 02:50:07.189094 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 02:50:07.189217 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:50:07.189294 1842604 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:50:07.189332 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:50:07.189413 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:50:07.189488 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:50:07.189568 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:50:07.189665 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:07.189733 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.189792 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem -> /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.189829 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.192734 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:50:07.215749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:50:07.236395 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:50:07.256110 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:50:07.276540 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:50:07.296274 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:50:07.314749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:50:07.333206 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:50:07.351818 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:50:07.370275 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:50:07.390851 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:50:07.409219 1842604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:50:07.421911 1842604 ssh_runner.go:195] Run: openssl version
	I1216 02:50:07.427966 1842604 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 02:50:07.428408 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.436062 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:50:07.443738 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447498 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447742 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447801 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.490768 1842604 command_runner.go:130] > b5213941
	I1216 02:50:07.491273 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:50:07.498894 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.506703 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:50:07.514440 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518338 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518429 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518508 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.559640 1842604 command_runner.go:130] > 51391683
	I1216 02:50:07.560095 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:50:07.567522 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.574982 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:50:07.582626 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586721 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586817 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586878 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.628240 1842604 command_runner.go:130] > 3ec20f2e
	I1216 02:50:07.628688 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
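Each test/ln/ls/openssl cycle above installs one CA under its OpenSSL subject-hash name: openssl x509 -hash -noout prints the hash (b5213941, 51391683, 3ec20f2e) and the certificate is symlinked as /etc/ssl/certs/<hash>.0, where OpenSSL-based clients look it up. A sketch of that loop in Go, an assumed equivalent of the flow rather than minikube's certs.go:

	// Assumed equivalent of the install loop above; needs root to write
	// under /etc/ssl/certs, just like the sudo invocations in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func installCA(pemPath string) error {
		// `openssl x509 -hash -noout` prints the subject-name hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}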
	I1216 02:50:07.636367 1842604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640270 1842604 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640300 1842604 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 02:50:07.640307 1842604 command_runner.go:130] > Device: 259,1	Inode: 2346079     Links: 1
	I1216 02:50:07.640313 1842604 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:07.640320 1842604 command_runner.go:130] > Access: 2025-12-16 02:45:59.904024015 +0000
	I1216 02:50:07.640326 1842604 command_runner.go:130] > Modify: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640331 1842604 command_runner.go:130] > Change: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640338 1842604 command_runner.go:130] >  Birth: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640415 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:50:07.685787 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.686316 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:50:07.726862 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.727358 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:50:07.769278 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.769775 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:50:07.810792 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.811300 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:50:07.852245 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.852345 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 02:50:07.894213 1842604 command_runner.go:130] > Certificate will not expire
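openssl x509 -noout -checkend 86400 succeeds only if the certificate is still valid 86400 seconds (24 hours) from now, which is why each probe above prints "Certificate will not expire". The same check expressed in Go's crypto/x509, using one of the paths from the log:

	// What `-checkend 86400` verifies: does NotAfter fall within the next 24h?
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}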
	I1216 02:50:07.894706 1842604 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:07.894832 1842604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:50:07.894910 1842604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:50:07.922900 1842604 cri.go:89] found id: ""
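The CRI listing at cri.go:54 shells out to crictl with a namespace label filter, and the empty result (found id: "") means no kube-system containers exist yet in the paused state being inspected. A Go sketch of that call, assumed equivalent to the sudo -s eval line above:

	// Assumed equivalent of the crictl invocation above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		// strings.Fields yields an empty slice for empty output, matching `found id: ""`.
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		fmt.Println(ids, err)
	}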
	I1216 02:50:07.922983 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:50:07.930226 1842604 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 02:50:07.930256 1842604 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 02:50:07.930263 1842604 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 02:50:07.931439 1842604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:50:07.931499 1842604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:50:07.931562 1842604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:50:07.943740 1842604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:50:07.944155 1842604 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.944257 1842604 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389759" cluster setting kubeconfig missing "functional-389759" context setting]
	I1216 02:50:07.944564 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.945009 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.945157 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:07.945886 1842604 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 02:50:07.945970 1842604 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 02:50:07.945985 1842604 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 02:50:07.945994 1842604 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 02:50:07.946007 1842604 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 02:50:07.946011 1842604 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 02:50:07.946339 1842604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:50:07.958263 1842604 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 02:50:07.958306 1842604 kubeadm.go:602] duration metric: took 26.787333ms to restartPrimaryControlPlane
	I1216 02:50:07.958316 1842604 kubeadm.go:403] duration metric: took 63.631777ms to StartCluster
	I1216 02:50:07.958333 1842604 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.958427 1842604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.959238 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
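kubeconfig.go:62 detects that both the "functional-389759" cluster and context entries are missing and rewrites the kubeconfig under a write lock. A hedged sketch of such a repair with client-go's clientcmd; this is an assumed minimal equivalent, not minikube's implementation:

	// Assumed minimal kubeconfig repair; minikube's own logic also handles
	// endpoint verification and file locking.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func repair(path, name, server, caFile string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Clusters[name]; !ok {
			cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		}
		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		err := repair("/home/jenkins/minikube-integration/22158-1796512/kubeconfig",
			"functional-389759", "https://192.168.49.2:8441",
			"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt")
		if err != nil {
			fmt.Println(err)
		}
	}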
	I1216 02:50:07.959525 1842604 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 02:50:07.959950 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:07.960006 1842604 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 02:50:07.960112 1842604 addons.go:70] Setting storage-provisioner=true in profile "functional-389759"
	I1216 02:50:07.960129 1842604 addons.go:239] Setting addon storage-provisioner=true in "functional-389759"
	I1216 02:50:07.960152 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:07.960945 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.961166 1842604 addons.go:70] Setting default-storageclass=true in profile "functional-389759"
	I1216 02:50:07.961188 1842604 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389759"
	I1216 02:50:07.961453 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.966091 1842604 out.go:179] * Verifying Kubernetes components...
	I1216 02:50:07.968861 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.999405 1842604 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:50:08.003951 1842604 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.003988 1842604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:50:08.004070 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.016743 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:08.016935 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:08.017229 1842604 addons.go:239] Setting addon default-storageclass=true in "functional-389759"
	I1216 02:50:08.017278 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:08.017759 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:08.056545 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.063547 1842604 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.063573 1842604 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:50:08.063643 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.096801 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.182820 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:08.204300 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.216429 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.938921 1842604 node_ready.go:35] waiting up to 6m0s for node "functional-389759" to be "Ready" ...
	I1216 02:50:08.939066 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:08.939127 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939127 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1216 02:50:08.939303 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939364 1842604 retry.go:31] will retry after 371.599151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939463 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:08.939655 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939681 1842604 retry.go:31] will retry after 208.586178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.149421 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.213240 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.213284 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.213304 1842604 retry.go:31] will retry after 201.914515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.311585 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.373333 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.373376 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.373396 1842604 retry.go:31] will retry after 439.688248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
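While the apiserver on 8441 is still down, every kubectl apply fails with connection refused and retry.go reschedules it after a randomized, growing delay (371ms, 208ms, 439ms, 841ms, ...). A sketch of that pattern as jittered exponential backoff; the exact policy minikube uses may differ:

	// Jittered exponential backoff, assumed similar to retry.go's behavior.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Random delay in [0, base*2^i), producing irregular intervals
			// like those in the log.
			d := time.Duration(rand.Int63n(int64(base) << uint(i)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(4, 400*time.Millisecond, func() error {
			return errors.New("connect: connection refused")
		})
	}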
	I1216 02:50:09.415509 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.439207 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.483422 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.483469 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.483489 1842604 retry.go:31] will retry after 841.778226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.814006 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.876109 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.880285 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.880327 1842604 retry.go:31] will retry after 574.892877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.939502 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.939583 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.939923 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.325447 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:10.394946 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.394995 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.395014 1842604 retry.go:31] will retry after 1.198470662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.439106 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.439176 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.439428 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.455825 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:10.523765 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.523815 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.523848 1842604 retry.go:31] will retry after 636.325982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.939367 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:10.939781 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
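In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-389759 roughly every 500ms and tolerates connection refused until the control plane answers. A hedged client-go sketch of the Ready-condition check; client construction from the kubeconfig above is assumed:

	// Assumed equivalent of the readiness poll; not minikube's node_ready.go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connect: connection refused" while the apiserver restarts
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22158-1796512/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			if ready, err := nodeReady(cs, "functional-389759"); err == nil && ready {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the poll cadence in the log
		}
	}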
	I1216 02:50:11.161191 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:11.242833 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.242908 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.242943 1842604 retry.go:31] will retry after 1.140424726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.439654 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:11.594053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:11.649408 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.653163 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.653194 1842604 retry.go:31] will retry after 1.344955883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.939594 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.939687 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.940009 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.383614 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:12.439264 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.439344 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.440165 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:12.443835 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.443867 1842604 retry.go:31] will retry after 2.819298169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.939234 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.939324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.999127 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:13.066096 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:13.066142 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.066177 1842604 retry.go:31] will retry after 2.29209329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.439591 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.439676 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.440017 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:13.440078 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:13.939859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.939946 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.940333 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.439599 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:15.264053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:15.323662 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.327080 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.327109 1842604 retry.go:31] will retry after 3.65241611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.359324 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:15.421588 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.421635 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.421654 1842604 retry.go:31] will retry after 1.62104706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.439778 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.439879 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.440170 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:15.440216 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
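The paired "Request"/"Response" lines around this warning are minikube's node readiness wait: roughly every 500ms it GETs the node object and checks its Ready condition, logging a warning and retrying while the connection is refused (the responses complete in 0 ms because the TCP connection is rejected immediately). A comparable loop written against client-go (a sketch, not minikube's node_ready.go, assuming /var/lib/minikube/kubeconfig is readable):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-389759", metav1.GetOptions{})
		if err != nil {
			// "connection refused" lands here while the apiserver is down.
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
}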
	I1216 02:50:15.940008 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.940084 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.940410 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.440078 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.440155 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.440450 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
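The Accept header on each request, application/vnd.kubernetes.protobuf,application/json, asks the apiserver for the compact protobuf wire encoding and falls back to JSON for resource types that don't support protobuf. With client-go this preference is a rest.Config setting; a short sketch (the field names are real rest.Config fields, the kubeconfig path is the one from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Prefer protobuf on the wire, fall back to JSON.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	fmt.Println("client will send Accept:", cfg.AcceptContentTypes)
}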
	I1216 02:50:16.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:17.043912 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:17.099707 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:17.103362 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.103395 1842604 retry.go:31] will retry after 4.481188348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.439835 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.439929 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.440261 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:17.440327 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:17.940004 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.940083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.940382 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.439649 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.939235 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.939311 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.939696 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.980018 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:19.042087 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:19.045748 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.045786 1842604 retry.go:31] will retry after 3.780614615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.439604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:19.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.939337 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.939666 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:19.939722 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:20.439426 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.439516 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.439851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:20.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.939502 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.439268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.585043 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:21.648279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:21.648322 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.648342 1842604 retry.go:31] will retry after 5.326379112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.939713 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.939790 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.940115 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:21.940177 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:22.439859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.439927 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.440196 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:22.826669 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:22.887724 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:22.891256 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.891291 1842604 retry.go:31] will retry after 7.007720529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.939466 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.939552 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.939870 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.439633 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.439715 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.440036 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.939677 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.939748 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.940008 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:24.439927 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.440005 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.440343 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:24.440400 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:24.939690 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.939766 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.940068 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.439712 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.439799 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.440085 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.939947 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.940024 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.940358 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.439107 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.439185 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.939570 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:26.939627 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:26.975786 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:27.047539 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:27.047579 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.047598 1842604 retry.go:31] will retry after 10.416340882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.439244 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.439321 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.439647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:27.939345 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.939450 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.939785 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.439175 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.439255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.439518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.939274 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.939371 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.939720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:28.939777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:29.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.439629 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.899356 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:29.940020 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.940094 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.940346 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.975996 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:29.976895 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:29.976922 1842604 retry.go:31] will retry after 13.637319362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:30.439216 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.439575 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:30.939293 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.939381 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:31.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:31.439575 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:31.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.939298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.439412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.439784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:33.439275 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.439356 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.439660 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:33.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:33.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.439634 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.439714 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.439961 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.939579 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.939653 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:35.439846 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.439925 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.440292 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:35.440352 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:35.940006 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.940080 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.940335 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.440174 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.440258 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.440580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.939312 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.939396 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.939727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.439556 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.464839 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:37.535691 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:37.535727 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.535748 1842604 retry.go:31] will retry after 13.417840341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
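By this point the retry delays for each manifest have stopped doubling: storageclass went 1.6s, 4.5s, 5.3s, 10.4s and has now plateaued around 13.4s, which is consistent with a jittered backoff clamped near its maximum. Continuing the hypothetical applyWithRetry sketch from earlier, the clamp is a one-line guard on the delay (the 16s ceiling is an assumption; the observed delays plateau somewhere under ~20s):

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the delay but clamps it at an assumed ceiling,
// matching the plateau in the delays logged above.
func nextBackoff(cur time.Duration) time.Duration {
	const maxBackoff = 16 * time.Second // assumed cap, for illustration
	cur *= 2
	if cur > maxBackoff {
		cur = maxBackoff
	}
	return cur
}

func main() {
	d := 2 * time.Second
	for i := 0; i < 6; i++ {
		fmt.Println(d)
		d = nextBackoff(d)
	}
}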
	I1216 02:50:37.939229 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:37.939658 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:38.439518 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.439602 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.439942 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:38.939665 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.939784 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.940059 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.440088 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.440162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.440456 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.939587 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:40.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.439222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.439491 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:40.439540 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:40.939209 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.439307 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.939288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.939552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:42.439218 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.439302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.439616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:42.439677 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:42.939334 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.939784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.439162 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.439595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.615150 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:43.680878 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:43.680928 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.680947 1842604 retry.go:31] will retry after 17.388789533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.939315 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.939409 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.939851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:44.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:44.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:44.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:44.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:44.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:44.939520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:44.939567 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:45.439260 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:45.439342 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:45.439687 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:45.939401 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:45.939486 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:45.939829 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:46.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:46.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:46.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:46.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:46.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:46.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:46.939686 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:47.439343 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:47.439416 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:47.439727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:47.939183 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:47.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:47.939522 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:48.439361 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:48.439438 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:48.439738 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:48.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:48.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:48.939638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:49.439150 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:49.439224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:49.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:49.439525 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:49.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:49.939270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:49.939604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:50.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:50.439605 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:50.939264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:50.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.954020 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:51.020279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:51.020323 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.020343 1842604 retry.go:31] will retry after 13.418822402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.440005 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:51.440079 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:51.440420 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:51.440473 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:51.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:51.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:51.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:53.939608 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET of /api/v1/nodes/functional-389759 repeats every ~500ms from 02:50:51 through 02:51:00, each response empty; the node_ready.go:55 "connection refused" warning recurs at 02:50:55, 02:50:57, and 02:51:00 ...]
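
What the loop above is doing, as a minimal sketch (not minikube's actual node_ready.go, which uses client-go): poll the node object on a fixed ~500ms cadence and treat a dial error as "not ready yet, keep retrying". The URL is taken from the log; the TLS handling, HTTP client, and overall deadline here are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL copied from the requests above.
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-389759"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test apiserver uses a cluster-local cert; skip verification in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)      // assumed overall budget
	ticker := time.NewTicker(500 * time.Millisecond) // matches the cadence in the log
	defer ticker.Stop()
	for range ticker.C {
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for node")
			return
		}
		resp, err := client.Get(url)
		if err != nil {
			// "connect: connection refused" lands here while the apiserver is down.
			fmt.Println("will retry:", err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver answering; a real poller would now check the Ready condition")
			return
		}
	}
}
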
	I1216 02:51:01.070030 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:01.132180 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:01.132233 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:01.132254 1842604 retry.go:31] will retry after 31.549707812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
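
The retry.go line above schedules another attempt after ~31.5s; the later delays (44.9s, 33.3s) vary, which suggests a jittered backoff. A minimal sketch of that apply-and-retry pattern, with the command taken verbatim from the log and the backoff shape and attempt cap assumed for illustration:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyOnce shells out exactly as the ssh_runner lines show minikube doing.
func applyOnce(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
	const attempts = 5 // assumed cap; the log does not state one
	for i := 1; i <= attempts; i++ {
		err := applyOnce(manifest)
		if err == nil {
			fmt.Println("applied", manifest)
			return
		}
		// ~30s base plus up to 15s of jitter, approximating the delays logged above.
		delay := 30*time.Second + time.Duration(rand.Int63n(int64(15*time.Second)))
		fmt.Printf("attempt %d: %v; will retry after %s\n", i, err, delay)
		time.Sleep(delay)
	}
	fmt.Println("giving up on", manifest)
}
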
	[... the ~500ms node poll continues unchanged from 02:51:01.439 through 02:51:03.940, with another node_ready.go:55 "connection refused" warning at 02:51:02 ...]
	I1216 02:51:04.439159 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:04.439236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:04.439336 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:04.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:04.517286 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:04.517332 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:04.517352 1842604 retry.go:31] will retry after 44.886251271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
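
Note that both apply failures above share one root cause and are not manifest problems: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver (the Get "https://localhost:8441/openapi/v2?timeout=32s" in the stderr), so as long as port 8441 refuses connections every apply fails identically. The --validate=false escape hatch the error message suggests would only skip that schema fetch; the subsequent request to the apiserver would still be refused, so retrying until the apiserver comes back is the sensible behavior here.
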
	[... the ~500ms poll of /api/v1/nodes/functional-389759 continues from 02:51:04.939 through 02:51:32.439, every response empty; node_ready.go:55 repeats the "connect: connection refused" (will retry) warning roughly every 2s (02:51:04, :07, :09, :11, :13, :16, :18, :20, :22, :25, :27, :29, :31) ...]
	I1216 02:51:32.683088 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:32.746941 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:32.746981 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:32.747001 1842604 retry.go:31] will retry after 33.271174209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues from 02:51:32.939 through 02:51:48.939, still refused; node_ready.go:55 warnings recur at 02:51:33, 02:51:36, 02:51:38, 02:51:40, 02:51:42, 02:51:45, and 02:51:47 ...]
	I1216 02:51:49.404519 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:49.440015 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.440083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.440326 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:49.440365 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:49.475362 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475396 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475480 1842604 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
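
The retry block above (and the matching storage-provisioner one further below) shows minikube's addon applier giving up only after the apiserver keeps refusing connections. As a minimal sketch — not minikube's actual implementation; the manifest path is taken from the log, while the attempt count and delay are illustrative assumptions — the "apply failed, will retry" behavior can be reproduced with a small Go wrapper around kubectl:

    // Sketch: retry `kubectl apply` until the apiserver accepts it,
    // mirroring the "apply failed, will retry" pattern in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            // --force matches the invocation recorded in the log.
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
            time.Sleep(delay) // back off while the apiserver is (re)starting
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 2*time.Second); err != nil {
            fmt.Println("giving up:", err)
        }
    }

With the apiserver down, every attempt exits with status 1, matching the "Process exited with status 1" lines here; note that --validate=false would only skip the OpenAPI download that kubectl complains about, not cure the refused connection on the apply itself.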
	I1216 02:51:49.939258 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.939331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continues unchanged every ~500ms through 02:52:05.939, every attempt refused, with "connect: connection refused" warnings logged roughly every 2s ...]
	I1216 02:52:06.018765 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:52:06.088979 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089023 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089108 1842604 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 02:52:06.092116 1842604 out.go:179] * Enabled addons: 
	I1216 02:52:06.094111 1842604 addons.go:530] duration metric: took 1m58.134103468s for enable addons: enabled=[]
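
Below, the node_ready.go poll resumes at the same ~500ms cadence. A minimal sketch of this kind of wait loop, assuming client-go (the kubeconfig path, node name, and timeout here are illustrative, not minikube's actual values):

    // Sketch: poll a node's Ready condition every 500ms, retrying on
    // connection errors, like the node_ready.go loop in this log.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForReady(cs *kubernetes.Clientset, node string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
            if err == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            // On "connection refused" the Get just errors; sleep and retry,
            // which is what produces the repeating warnings in the log.
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q never became Ready within %s", node, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForReady(cs, "functional-389759", 5*time.Minute))
    }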
	I1216 02:52:06.439418 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:06.439511 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:06.439875 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the poll repeats every ~500ms through 02:52:40.939 with the same empty responses and periodic "connect: connection refused" warnings; the node never reports Ready in this window ...]
	W1216 02:52:40.939636 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:41.439181 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:41.439262 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:41.439601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:41.939151 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:41.939224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:41.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:42.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:42.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:42.439574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:42.939218 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:42.939302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:42.939650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:42.939708 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:43.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:43.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:43.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:43.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:43.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:43.939585 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:44.439396 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:44.439479 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:44.439801 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:44.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:44.939227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:44.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:45.439245 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:45.439326 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:45.439625 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:45.439674 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:45.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:45.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:45.939606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:46.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:46.439229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:46.439538 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:46.939135 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:46.939212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:46.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:47.439164 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:47.439253 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:47.439603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:47.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:47.939218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:47.939573 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:47.939618 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:48.439193 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:48.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:48.439619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:48.939338 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:48.939416 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:48.939771 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:49.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:49.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:49.439504 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:49.939205 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:49.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:49.939621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:49.939679 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:50.439358 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:50.439438 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:50.439821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:50.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:50.939217 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:50.939512 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:51.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:51.439257 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:51.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:51.939305 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:51.939392 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:51.939737 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:51.939789 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:52.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:52.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:52.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:52.939183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:52.939262 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:52.939616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:53.439235 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:53.439314 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:53.439636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:53.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:53.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:53.939491 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:54.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:54.439271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:54.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:54.439659 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:54.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:54.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:54.939623 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:55.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:55.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:55.439546 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:55.939182 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:55.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:55.939593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:56.439309 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:56.439391 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:56.439719 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:56.439777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:56.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:56.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:56.939545 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:57.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:57.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:57.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:57.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:57.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:57.939589 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:58.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:58.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:58.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:58.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:58.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:58.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:58.939617 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:59.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:59.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:59.439612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:59.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:59.939212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:59.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:00.439229 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:00.439331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:00.439692 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:00.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:00.939273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:00.939593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:00.939649 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:01.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:01.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.939638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.439351 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.439429 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.439735 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.939471 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:03.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:03.439650 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:03.939321 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.939395 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.439219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.439542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.939207 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.939477 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:05.939529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:06.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.439240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.439557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:06.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.939724 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.439508 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.939335 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.939685 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:07.939739 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:08.439494 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.439576 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.439903 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:08.939686 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.939764 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.940063 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.440032 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.440108 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.440422 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.939177 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.939592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:10.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:10.439589 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:10.939186 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.439598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:12.439168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:12.439639 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:12.939298 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.939374 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.939265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:14.439387 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.439472 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.439812 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:14.439867 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:14.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.939501 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.439631 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.939227 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.939303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.939647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.939242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.939600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:16.939656 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:17.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.439397 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.439744 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:17.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.439237 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.439576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.939765 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:18.939822 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:19.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.439538 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:19.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.939233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.439182 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:21.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.439583 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:21.439640 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:21.939297 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.939731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.439139 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.439207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.439539 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.939225 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.939299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:23.439346 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.439421 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.439771 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:23.439824 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:23.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.939483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.439299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.439632 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.939319 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.939706 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.439591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.939582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:25.939638 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:26.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.439592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:26.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.939259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.439601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.939270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.939597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:28.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:28.439529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:28.939176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.939576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.439313 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.439393 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.439725 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.939171 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:30.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:30.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:30.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:30.439655 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 poll repeats every ~500 ms from 02:53:30.9 through 02:54:31.4, each attempt returning no response (milliseconds=0; a single attempt at 02:54:04 took 5 ms), with the W node_ready.go:55 "connection refused" retry warning above recurring roughly every 2 s ...]
	I1216 02:54:31.939283 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:31.939364 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:31.939698 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:31.939756 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:32.439434 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:32.439515 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:32.439885 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:32.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:32.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:32.939520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:33.439177 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:33.439263 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:33.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:33.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:33.939412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:33.939761 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:33.939818 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:34.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:34.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:34.439568 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:34.939156 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:34.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:34.939595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:35.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:35.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:35.439639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:35.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:35.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:35.939616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:36.439210 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:36.439288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:36.439644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:36.439706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:36.939218 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:36.939300 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:36.939664 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:37.439159 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:37.439230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:37.439551 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:37.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:37.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:37.939615 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:38.439345 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:38.439428 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:38.439811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:38.439872 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:38.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:38.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:38.939515 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:39.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:39.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:39.439608 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:39.939305 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:39.939380 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:39.939747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:40.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:40.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:40.439481 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:40.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:40.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:40.939568 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:40.939620 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:41.439300 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:41.439383 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:41.439721 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:41.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:41.939218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:41.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:42.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:42.439247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:42.439561 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:42.939283 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:42.939360 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:42.939717 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:42.939776 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:43.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.439482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:43.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.939617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.439181 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.439582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.939158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:45.439421 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.439558 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.440302 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:45.440450 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:45.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.939295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.939642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.439483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.939250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.939578 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.439549 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.940079 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.940162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.940421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:47.940463 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:48.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.439275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.439617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:48.939330 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.939404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.939700 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.439238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.439553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:50.439334 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.439719 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:50.439774 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:50.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.439267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.439563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.439158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.439503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.939205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:52.939669 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:53.439238 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.439324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:53.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.939529 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.439450 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.439530 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.439887 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.939607 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.939685 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:54.940065 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:55.439530 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.439603 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.439907 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:55.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.439329 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.439662 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.939519 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:57.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.439641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:57.439700 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:57.939371 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.939446 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.939781 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.439185 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.439281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.439643 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.939188 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:59.939684 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:00.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.439758 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:00.939212 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.939641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.939279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.939639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:01.939706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:02.439386 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.439463 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.439808 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:02.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.939260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.939584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:04.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.439530 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:04.439579 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:04.939230 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.939672 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.439280 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.439358 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.439751 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.939466 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:06.439161 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:06.439630 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:06.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.939644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.439279 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.439353 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.439621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.939358 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.939442 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.939811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:08.439810 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.439889 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.440239 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:08.440291 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:08.939609 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.939681 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.939929 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.439800 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.439881 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.440208 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.939526 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.939604 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.939943 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.439719 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.439792 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.440067 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.939812 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.939897 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.940243 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:10.940297 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:11.440018 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.440100 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.440421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:11.939675 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.939759 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.940020 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.439848 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.439919 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.440237 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.940077 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.940153 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.940509 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:12.940583 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:13.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:13.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.939268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.939610 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.439597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.939566 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:15.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.439663 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:15.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:15.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.939245 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.939569 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.439533 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.939603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:17.439324 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.439747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:17.439815 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:17.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.939524 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.439650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.939285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.439223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.439468 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:19.939616 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:20.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.439548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:20.939137 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.939207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.939465 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:21.939729 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll repeats every ~500ms from 02:55:21 through 02:56:08, and node_ready.go:55 logs the same "connection refused" warning roughly every two seconds; repeated cycles elided ...]
	I1216 02:56:08.439510 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:08.439620 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:08.440275 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:08.939119 1842604 node_ready.go:38] duration metric: took 6m0.000151723s for node "functional-389759" to be "Ready" ...
	I1216 02:56:08.942443 1842604 out.go:203] 
	W1216 02:56:08.945313 1842604 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 02:56:08.945510 1842604 out.go:285] * 
	W1216 02:56:08.947818 1842604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:56:08.950773 1842604 out.go:203] 
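
The six-minute loop above is minikube's node-readiness wait: re-GET the Node object on a fixed cadence, tolerate transient errors, and give up at the timeout. Below is a minimal sketch of that pattern using client-go, assuming an already configured *kubernetes.Clientset; the function name pollNodeReady is illustrative, not minikube's actual helper.

// Package nodewait sketches the readiness poll seen in the log above.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// pollNodeReady re-GETs the node every 500ms (the cadence visible in the
// log) until its Ready condition is True or the timeout expires, treating
// errors such as "connection refused" as transient and retrying.
func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil // transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

With a 6m0s timeout this returns context.DeadlineExceeded when the apiserver never comes up, which is the "WaitNodeCondition: context deadline exceeded" failure reported above.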
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582071213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582085769Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582153534Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582172118Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582182621Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582193755Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582203732Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582218985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582235043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582268732Z" level=info msg="Connect containerd service"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.582549853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.583245998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.601954104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.602161822Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.602278620Z" level=info msg="Start recovering state"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.602242067Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638073037Z" level=info msg="Start event monitor"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638125656Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638137889Z" level=info msg="Start streaming server"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638147555Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638156260Z" level=info msg="runtime interface starting up..."
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638163350Z" level=info msg="starting plugins..."
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.638176322Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:50:06 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:50:06 functional-389759 containerd[5252]: time="2025-12-16T02:50:06.639112317Z" level=info msg="containerd successfully booted in 0.078687s"
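
The "failed to load cni during init" line above is containerd noticing that /etc/cni/net.d contains no network config yet; this is expected before the CNI is installed. A hedged sketch of that check is below; the paths and patterns are assumptions for illustration, not containerd's actual loader.

// Package cnicheck sketches the "no network config found" check above.
package cnicheck

import (
	"fmt"
	"path/filepath"
)

// HasCNIConfig reports whether dir contains at least one *.conf, *.conflist
// or *.json network definition; when none exist, a loader in this style
// produces the "no network config found in /etc/cni/net.d" error.
func HasCNIConfig(dir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return false, fmt.Errorf("globbing %s: %w", pat, err)
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}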
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:56:13.231117    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:13.231979    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:13.233641    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:13.233986    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:13.235537    8600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
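
"Connection refused" in the stderr above means nothing is listening on port 8441 at all, as opposed to a TLS or authorization failure. A plain TCP dial distinguishes the two; a minimal diagnostic sketch, assuming the endpoint from the log:

// Package probe sketches a reachability check for the apiserver endpoint.
package probe

import (
	"fmt"
	"net"
	"time"
)

// ProbeAPIServer attempts a raw TCP connection. ECONNREFUSED here, as in
// the log, means no process is bound to the port, not a rejected request.
func ProbeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
	}
	defer conn.Close()
	return nil
}

For this report the call would be ProbeAPIServer("192.168.49.2:8441"), which fails for as long as the kubelet (and hence the static apiserver pod) never starts.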
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:56:13 up  8:38,  0 user,  load average: 0.43, 0.31, 0.82
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 02:56:09 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:10 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 16 02:56:10 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:10 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:10 functional-389759 kubelet[8379]: E1216 02:56:10.497579    8379 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:10 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:10 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:11 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 16 02:56:11 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:11 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:11 functional-389759 kubelet[8478]: E1216 02:56:11.247921    8478 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:11 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:11 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:11 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 16 02:56:11 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:11 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:12 functional-389759 kubelet[8499]: E1216 02:56:12.001093    8499 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:12 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:12 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:12 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 16 02:56:12 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:12 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:12 functional-389759 kubelet[8519]: E1216 02:56:12.759743    8519 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:12 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:12 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (399.159559ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.26s)
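The kubelet journal above is the actual root cause for this serial group: the kubelet exits during configuration validation because the host runs cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd keeps restarting it (restart counter 811-814), and the apiserver therefore never comes up. A minimal sketch for confirming which cgroup hierarchy the host is on (generic shell, not part of the test harness):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy.
	stat -fc %T /sys/fs/cgroup/
	# Same check inside the kic container (it runs with CgroupnsMode "host"):
	docker exec functional-389759 stat -fc %T /sys/fs/cgroup/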

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 kubectl -- --context functional-389759 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 kubectl -- --context functional-389759 get pods: exit status 1 (109.609814ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-389759 kubectl -- --context functional-389759 get pods": exit status 1
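This "connection refused" is a downstream symptom of the kubelet crash loop above, not an independent failure: nothing listens on 192.168.49.2:8441 because the static-pod apiserver was never started. A quick reachability sketch, assuming the port map shown in the docker inspect output below (8441/tcp published on 127.0.0.1:34357):

	# Both should answer on /healthz once the control plane is actually up;
	# here they fail the same way kubectl does.
	curl -k https://192.168.49.2:8441/healthz
	curl -k https://127.0.0.1:34357/healthz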
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
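The inspect output confirms the container itself is healthy: State.Status is "running", RestartCount is 0, and all five ports (22, 2376, 5000, 8441, 32443) are published on 127.0.0.1. Only the control plane inside it is down. When triaging by hand, the same Go-template syntax the harness uses elsewhere in this log can pull out just those fields instead of the full JSON, e.g.:

	docker inspect -f '{{.State.Status}}' functional-389759
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389759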
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (309.237749ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-853651 image ls --format json --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format table --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls                                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ delete         │ -p functional-853651                                                                                                                                    │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-389759 --alsologtostderr -v=8                                                                                                             │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:50 UTC │                     │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:latest                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add minikube-local-cache-test:functional-389759                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache delete minikube-local-cache-test:functional-389759                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl images                                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ cache          │ functional-389759 cache reload                                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ kubectl        │ functional-389759 kubectl -- --context functional-389759 get pods                                                                                       │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
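	In the Audit table above, rows with an empty END TIME never completed: both start invocations against functional-389759 (02:41 and 02:50 UTC) never recorded an end time, consistent with the kubelet crash loop. minikube typically persists these rows as JSON under MINIKUBE_HOME (location may vary by version), so on this runner the raw entries should be inspectable with something like:

	tail -n 5 /home/jenkins/minikube-integration/22158-1796512/.minikube/logs/audit.json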
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:50:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:50:03.940449 1842604 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:50:03.940640 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.940666 1842604 out.go:374] Setting ErrFile to fd 2...
	I1216 02:50:03.940685 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.941001 1842604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:50:03.941424 1842604 out.go:368] Setting JSON to false
	I1216 02:50:03.942302 1842604 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30748,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:50:03.942395 1842604 start.go:143] virtualization:  
	I1216 02:50:03.948050 1842604 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:50:03.951289 1842604 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:50:03.951381 1842604 notify.go:221] Checking for updates...
	I1216 02:50:03.954734 1842604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:50:03.957600 1842604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:03.960611 1842604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:50:03.963508 1842604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:50:03.966329 1842604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:50:03.969672 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:03.969806 1842604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:50:04.007031 1842604 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:50:04.007241 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.073702 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.062313817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.073812 1842604 docker.go:319] overlay module found
	I1216 02:50:04.077006 1842604 out.go:179] * Using the docker driver based on existing profile
	I1216 02:50:04.079902 1842604 start.go:309] selected driver: docker
	I1216 02:50:04.079932 1842604 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.080054 1842604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:50:04.080179 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.136011 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.126842192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.136427 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:04.136482 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:04.136533 1842604 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.139723 1842604 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:50:04.142545 1842604 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:50:04.145565 1842604 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:50:04.148399 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:04.148453 1842604 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:50:04.148469 1842604 cache.go:65] Caching tarball of preloaded images
	I1216 02:50:04.148474 1842604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:50:04.148567 1842604 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:50:04.148577 1842604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:50:04.148682 1842604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:50:04.168498 1842604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:50:04.168522 1842604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:50:04.168544 1842604 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:50:04.168575 1842604 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:50:04.168643 1842604 start.go:364] duration metric: took 46.539µs to acquireMachinesLock for "functional-389759"
	I1216 02:50:04.168667 1842604 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:50:04.168673 1842604 fix.go:54] fixHost starting: 
	I1216 02:50:04.168962 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:04.192862 1842604 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:50:04.192891 1842604 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:50:04.196202 1842604 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:50:04.196246 1842604 machine.go:94] provisionDockerMachine start ...
	I1216 02:50:04.196329 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.213973 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.214316 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.214325 1842604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:50:04.350600 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.350628 1842604 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:50:04.350691 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.368974 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.369299 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.369316 1842604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:50:04.513062 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.513215 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.531552 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.531870 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.531893 1842604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:50:04.663498 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:50:04.663573 1842604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:50:04.663612 1842604 ubuntu.go:190] setting up certificates
	I1216 02:50:04.663658 1842604 provision.go:84] configureAuth start
	I1216 02:50:04.663756 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:04.681830 1842604 provision.go:143] copyHostCerts
	I1216 02:50:04.681871 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681914 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:50:04.681921 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681996 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:50:04.682080 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682098 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:50:04.682107 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682134 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:50:04.682171 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682188 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:50:04.682192 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682218 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:50:04.682263 1842604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:50:04.918732 1842604 provision.go:177] copyRemoteCerts
	I1216 02:50:04.918803 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:50:04.918909 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.945401 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.043237 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 02:50:05.043301 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:50:05.061641 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 02:50:05.061702 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:50:05.079841 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 02:50:05.079956 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:50:05.097722 1842604 provision.go:87] duration metric: took 434.019439ms to configureAuth
	I1216 02:50:05.097754 1842604 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:50:05.097953 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:05.097967 1842604 machine.go:97] duration metric: took 901.714132ms to provisionDockerMachine
	I1216 02:50:05.097975 1842604 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:50:05.097987 1842604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:50:05.098051 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:50:05.098102 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.115383 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.211319 1842604 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:50:05.214768 1842604 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 02:50:05.214793 1842604 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 02:50:05.214797 1842604 command_runner.go:130] > VERSION_ID="12"
	I1216 02:50:05.214802 1842604 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 02:50:05.214807 1842604 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 02:50:05.214810 1842604 command_runner.go:130] > ID=debian
	I1216 02:50:05.214815 1842604 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 02:50:05.214820 1842604 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 02:50:05.214826 1842604 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 02:50:05.214871 1842604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:50:05.214894 1842604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:50:05.214911 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:50:05.214973 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:50:05.215088 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:50:05.215101 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /etc/ssl/certs/17983702.pem
	I1216 02:50:05.215203 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:50:05.215211 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> /etc/test/nested/copy/1798370/hosts
	I1216 02:50:05.215287 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:50:05.223273 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:05.241790 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:50:05.259718 1842604 start.go:296] duration metric: took 161.727689ms for postStartSetup
	I1216 02:50:05.259801 1842604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:50:05.259846 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.277760 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.371870 1842604 command_runner.go:130] > 18%
	I1216 02:50:05.372496 1842604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:50:05.377201 1842604 command_runner.go:130] > 161G
	I1216 02:50:05.377708 1842604 fix.go:56] duration metric: took 1.209030723s for fixHost
	I1216 02:50:05.377728 1842604 start.go:83] releasing machines lock for "functional-389759", held for 1.209073027s
	I1216 02:50:05.377811 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:05.395427 1842604 ssh_runner.go:195] Run: cat /version.json
	I1216 02:50:05.395497 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.395795 1842604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:50:05.395856 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.414621 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.417076 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.510754 1842604 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 02:50:05.510902 1842604 ssh_runner.go:195] Run: systemctl --version
	I1216 02:50:05.609923 1842604 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 02:50:05.612841 1842604 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 02:50:05.612896 1842604 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 02:50:05.613034 1842604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 02:50:05.617736 1842604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 02:50:05.617774 1842604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:50:05.617838 1842604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:50:05.626000 1842604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:50:05.626028 1842604 start.go:496] detecting cgroup driver to use...
	I1216 02:50:05.626059 1842604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:50:05.626109 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:50:05.644077 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:50:05.659636 1842604 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:50:05.659709 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:50:05.676805 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:50:05.692573 1842604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:50:05.816755 1842604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:50:05.944883 1842604 docker.go:234] disabling docker service ...
	I1216 02:50:05.944952 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:50:05.960111 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:50:05.973273 1842604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:50:06.102700 1842604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:50:06.226099 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:50:06.239914 1842604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:50:06.254235 1842604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 02:50:06.255720 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:50:06.265881 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:50:06.274988 1842604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:50:06.275099 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:50:06.284319 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.293767 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:50:06.302914 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.312051 1842604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:50:06.320364 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:50:06.329464 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:50:06.338574 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
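The sed edits above rewrite a handful of keys in /etc/containerd/config.toml; a quick way to confirm the values they leave behind (key names taken from the commands above):

    grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # Expected after the edits:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false            # cgroupfs driver, matching the host
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true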
	I1216 02:50:06.347623 1842604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:50:06.354520 1842604 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 02:50:06.355609 1842604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
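The two steps above first read the bridge-netfilter knob, then force IPv4 forwarding on; an equivalent pair using sysctl alone (a sketch):

    sysctl net.bridge.bridge-nf-call-iptables   # read: expect 1 for kube-proxy
    sudo sysctl -w net.ipv4.ip_forward=1        # same effect as the echo above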
	I1216 02:50:06.363468 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:06.501216 1842604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 02:50:06.641570 1842604 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:50:06.641646 1842604 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:50:06.645599 1842604 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1216 02:50:06.645623 1842604 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 02:50:06.645629 1842604 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1216 02:50:06.645636 1842604 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:06.645642 1842604 command_runner.go:130] > Access: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645647 1842604 command_runner.go:130] > Modify: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645652 1842604 command_runner.go:130] > Change: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645656 1842604 command_runner.go:130] >  Birth: -
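A minimal polling loop matching the 60-second socket wait above (a sketch; minikube itself waits in Go):

    for _ in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break   # -S: socket exists
      sleep 1
    done
    stat /run/containerd/containerd.sock    # srw-rw---- once containerd is up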
	I1216 02:50:06.645685 1842604 start.go:564] Will wait 60s for crictl version
	I1216 02:50:06.645740 1842604 ssh_runner.go:195] Run: which crictl
	I1216 02:50:06.649139 1842604 command_runner.go:130] > /usr/local/bin/crictl
	I1216 02:50:06.649430 1842604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:50:06.676623 1842604 command_runner.go:130] > Version:  0.1.0
	I1216 02:50:06.676645 1842604 command_runner.go:130] > RuntimeName:  containerd
	I1216 02:50:06.676661 1842604 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1216 02:50:06.676671 1842604 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 02:50:06.676683 1842604 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:50:06.676740 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.701508 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.703452 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.721412 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.729453 1842604 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:50:06.732626 1842604 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:50:06.754519 1842604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:50:06.758684 1842604 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 02:50:06.758798 1842604 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:50:06.758921 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:06.758993 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.783030 1842604 command_runner.go:130] > {
	I1216 02:50:06.783088 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.783093 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783103 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.783109 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783114 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.783117 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783121 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783130 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.783133 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783138 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.783142 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783146 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783149 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783152 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783160 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.783163 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783169 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.783172 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783176 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783185 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.783188 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783192 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.783196 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783200 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783204 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783207 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783214 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.783224 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783229 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.783232 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783243 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783251 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.783254 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783258 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.783262 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.783266 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783269 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783272 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783278 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.783282 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783287 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.783290 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783294 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783305 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.783308 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783312 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.783317 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783321 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783324 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783328 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783332 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783337 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783340 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783347 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.783351 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.783359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783363 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783370 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.783374 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783381 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.783384 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783392 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783395 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783399 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783403 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783406 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783409 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783415 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.783419 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783424 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.783427 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783431 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783439 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.783442 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783446 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.783450 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783454 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783457 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783461 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783464 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783467 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783470 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783476 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.783480 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783485 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.783488 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783492 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783499 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.783503 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783506 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.783510 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783514 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783520 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783524 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783530 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.783534 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783540 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.783543 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783546 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783554 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.783557 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.783568 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783572 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783575 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783579 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783582 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783585 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783588 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783595 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.783599 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783604 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.783608 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783611 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783619 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.783622 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783625 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.783629 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783633 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.783636 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783639 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783643 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.783646 1842604 command_runner.go:130] >     }
	I1216 02:50:06.783648 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.783651 1842604 command_runner.go:130] > }
	I1216 02:50:06.785559 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.785577 1842604 containerd.go:534] Images already preloaded, skipping extraction
	I1216 02:50:06.785637 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.809062 1842604 command_runner.go:130] > {
	I1216 02:50:06.809080 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.809085 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809094 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.809099 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809105 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.809108 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809112 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809121 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.809125 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809129 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.809133 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809137 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809140 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809143 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809153 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.809157 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809162 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.809166 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809170 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809178 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.809181 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809186 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.809189 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809193 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809196 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809199 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809207 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.809211 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809216 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.809219 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809226 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809235 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.809241 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809246 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.809250 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.809254 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809257 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809260 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809267 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.809270 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809276 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.809279 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809283 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809291 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.809294 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809298 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.809303 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809307 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809311 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809315 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809318 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809322 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809325 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809332 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.809335 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809341 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.809344 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809348 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.809359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809364 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.809367 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809379 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809382 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809386 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809393 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809396 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809399 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809406 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.809410 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809416 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.809419 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809423 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809432 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.809435 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809439 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.809443 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809447 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809450 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809453 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809461 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809464 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809467 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809475 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.809478 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809483 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.809486 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809490 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809498 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.809501 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809505 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.809509 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809513 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809516 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809519 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809526 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.809530 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809535 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.809541 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809545 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809553 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.809556 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.809564 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809568 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809571 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809575 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809579 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809582 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809585 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809591 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.809595 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809599 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.809602 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809606 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809614 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.809616 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809620 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.809624 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809627 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.809632 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809635 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809639 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.809642 1842604 command_runner.go:130] >     }
	I1216 02:50:06.809645 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.809648 1842604 command_runner.go:130] > }
	I1216 02:50:06.811271 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.811300 1842604 cache_images.go:86] Images are preloaded, skipping loading
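The preload check parses the JSON above; the same tag list can be pulled out with jq (jq is illustrative here, not what minikube runs):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'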
	I1216 02:50:06.811308 1842604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:50:06.811452 1842604 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:50:06.811544 1842604 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:50:06.842167 1842604 command_runner.go:130] > {
	I1216 02:50:06.842188 1842604 command_runner.go:130] >   "cniconfig": {
	I1216 02:50:06.842194 1842604 command_runner.go:130] >     "Networks": [
	I1216 02:50:06.842198 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842203 1842604 command_runner.go:130] >         "Config": {
	I1216 02:50:06.842208 1842604 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1216 02:50:06.842213 1842604 command_runner.go:130] >           "Name": "cni-loopback",
	I1216 02:50:06.842217 1842604 command_runner.go:130] >           "Plugins": [
	I1216 02:50:06.842220 1842604 command_runner.go:130] >             {
	I1216 02:50:06.842224 1842604 command_runner.go:130] >               "Network": {
	I1216 02:50:06.842229 1842604 command_runner.go:130] >                 "ipam": {},
	I1216 02:50:06.842234 1842604 command_runner.go:130] >                 "type": "loopback"
	I1216 02:50:06.842238 1842604 command_runner.go:130] >               },
	I1216 02:50:06.842243 1842604 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1216 02:50:06.842246 1842604 command_runner.go:130] >             }
	I1216 02:50:06.842249 1842604 command_runner.go:130] >           ],
	I1216 02:50:06.842259 1842604 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1216 02:50:06.842263 1842604 command_runner.go:130] >         },
	I1216 02:50:06.842268 1842604 command_runner.go:130] >         "IFName": "lo"
	I1216 02:50:06.842271 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842275 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842279 1842604 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1216 02:50:06.842283 1842604 command_runner.go:130] >     "PluginDirs": [
	I1216 02:50:06.842287 1842604 command_runner.go:130] >       "/opt/cni/bin"
	I1216 02:50:06.842291 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842298 1842604 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1216 02:50:06.842301 1842604 command_runner.go:130] >     "Prefix": "eth"
	I1216 02:50:06.842304 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842308 1842604 command_runner.go:130] >   "config": {
	I1216 02:50:06.842312 1842604 command_runner.go:130] >     "cdiSpecDirs": [
	I1216 02:50:06.842315 1842604 command_runner.go:130] >       "/etc/cdi",
	I1216 02:50:06.842320 1842604 command_runner.go:130] >       "/var/run/cdi"
	I1216 02:50:06.842328 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842331 1842604 command_runner.go:130] >     "cni": {
	I1216 02:50:06.842335 1842604 command_runner.go:130] >       "binDir": "",
	I1216 02:50:06.842338 1842604 command_runner.go:130] >       "binDirs": [
	I1216 02:50:06.842342 1842604 command_runner.go:130] >         "/opt/cni/bin"
	I1216 02:50:06.842345 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.842349 1842604 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1216 02:50:06.842352 1842604 command_runner.go:130] >       "confTemplate": "",
	I1216 02:50:06.842356 1842604 command_runner.go:130] >       "ipPref": "",
	I1216 02:50:06.842359 1842604 command_runner.go:130] >       "maxConfNum": 1,
	I1216 02:50:06.842364 1842604 command_runner.go:130] >       "setupSerially": false,
	I1216 02:50:06.842368 1842604 command_runner.go:130] >       "useInternalLoopback": false
	I1216 02:50:06.842371 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842378 1842604 command_runner.go:130] >     "containerd": {
	I1216 02:50:06.842382 1842604 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1216 02:50:06.842387 1842604 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1216 02:50:06.842392 1842604 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1216 02:50:06.842396 1842604 command_runner.go:130] >       "runtimes": {
	I1216 02:50:06.842399 1842604 command_runner.go:130] >         "runc": {
	I1216 02:50:06.842404 1842604 command_runner.go:130] >           "ContainerAnnotations": null,
	I1216 02:50:06.842415 1842604 command_runner.go:130] >           "PodAnnotations": null,
	I1216 02:50:06.842421 1842604 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1216 02:50:06.842425 1842604 command_runner.go:130] >           "cgroupWritable": false,
	I1216 02:50:06.842429 1842604 command_runner.go:130] >           "cniConfDir": "",
	I1216 02:50:06.842433 1842604 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1216 02:50:06.842436 1842604 command_runner.go:130] >           "io_type": "",
	I1216 02:50:06.842439 1842604 command_runner.go:130] >           "options": {
	I1216 02:50:06.842443 1842604 command_runner.go:130] >             "BinaryName": "",
	I1216 02:50:06.842448 1842604 command_runner.go:130] >             "CriuImagePath": "",
	I1216 02:50:06.842451 1842604 command_runner.go:130] >             "CriuWorkPath": "",
	I1216 02:50:06.842455 1842604 command_runner.go:130] >             "IoGid": 0,
	I1216 02:50:06.842458 1842604 command_runner.go:130] >             "IoUid": 0,
	I1216 02:50:06.842462 1842604 command_runner.go:130] >             "NoNewKeyring": false,
	I1216 02:50:06.842469 1842604 command_runner.go:130] >             "Root": "",
	I1216 02:50:06.842473 1842604 command_runner.go:130] >             "ShimCgroup": "",
	I1216 02:50:06.842480 1842604 command_runner.go:130] >             "SystemdCgroup": false
	I1216 02:50:06.842483 1842604 command_runner.go:130] >           },
	I1216 02:50:06.842488 1842604 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1216 02:50:06.842494 1842604 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1216 02:50:06.842499 1842604 command_runner.go:130] >           "runtimePath": "",
	I1216 02:50:06.842504 1842604 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1216 02:50:06.842508 1842604 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1216 02:50:06.842512 1842604 command_runner.go:130] >           "snapshotter": ""
	I1216 02:50:06.842515 1842604 command_runner.go:130] >         }
	I1216 02:50:06.842518 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842521 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842530 1842604 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1216 02:50:06.842535 1842604 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1216 02:50:06.842541 1842604 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1216 02:50:06.842546 1842604 command_runner.go:130] >     "disableApparmor": false,
	I1216 02:50:06.842550 1842604 command_runner.go:130] >     "disableHugetlbController": true,
	I1216 02:50:06.842554 1842604 command_runner.go:130] >     "disableProcMount": false,
	I1216 02:50:06.842558 1842604 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1216 02:50:06.842562 1842604 command_runner.go:130] >     "enableCDI": true,
	I1216 02:50:06.842565 1842604 command_runner.go:130] >     "enableSelinux": false,
	I1216 02:50:06.842569 1842604 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1216 02:50:06.842573 1842604 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1216 02:50:06.842578 1842604 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1216 02:50:06.842582 1842604 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1216 02:50:06.842586 1842604 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1216 02:50:06.842590 1842604 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1216 02:50:06.842595 1842604 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1216 02:50:06.842600 1842604 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842604 1842604 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1216 02:50:06.842610 1842604 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842614 1842604 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1216 02:50:06.842622 1842604 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1216 02:50:06.842625 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842628 1842604 command_runner.go:130] >   "features": {
	I1216 02:50:06.842632 1842604 command_runner.go:130] >     "supplemental_groups_policy": true
	I1216 02:50:06.842635 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842639 1842604 command_runner.go:130] >   "golang": "go1.24.9",
	I1216 02:50:06.842649 1842604 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842658 1842604 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842662 1842604 command_runner.go:130] >   "runtimeHandlers": [
	I1216 02:50:06.842665 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842668 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842672 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842676 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842679 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842682 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842685 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842688 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842693 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842697 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842700 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842703 1842604 command_runner.go:130] >       "name": "runc"
	I1216 02:50:06.842706 1842604 command_runner.go:130] >     }
	I1216 02:50:06.842709 1842604 command_runner.go:130] >   ],
	I1216 02:50:06.842713 1842604 command_runner.go:130] >   "status": {
	I1216 02:50:06.842716 1842604 command_runner.go:130] >     "conditions": [
	I1216 02:50:06.842719 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842723 1842604 command_runner.go:130] >         "message": "",
	I1216 02:50:06.842730 1842604 command_runner.go:130] >         "reason": "",
	I1216 02:50:06.842734 1842604 command_runner.go:130] >         "status": true,
	I1216 02:50:06.842739 1842604 command_runner.go:130] >         "type": "RuntimeReady"
	I1216 02:50:06.842742 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842745 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842756 1842604 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1216 02:50:06.842764 1842604 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1216 02:50:06.842775 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842779 1842604 command_runner.go:130] >         "type": "NetworkReady"
	I1216 02:50:06.842782 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842785 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842816 1842604 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1216 02:50:06.842831 1842604 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1216 02:50:06.842837 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842848 1842604 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1216 02:50:06.842852 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842855 1842604 command_runner.go:130] >     ]
	I1216 02:50:06.842857 1842604 command_runner.go:130] >   }
	I1216 02:50:06.842860 1842604 command_runner.go:130] > }
	I1216 02:50:06.845895 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:06.845921 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:06.845936 1842604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:50:06.845966 1842604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:50:06.846165 1842604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 02:50:06.846270 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:50:06.854737 1842604 command_runner.go:130] > kubeadm
	I1216 02:50:06.854757 1842604 command_runner.go:130] > kubectl
	I1216 02:50:06.854762 1842604 command_runner.go:130] > kubelet
	I1216 02:50:06.854790 1842604 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:50:06.854884 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:50:06.863474 1842604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:50:06.877235 1842604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:50:06.893176 1842604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
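Once the rendered config lands at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked with kubeadm's built-in validator (a sketch; this is not a step minikube runs here):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new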
	I1216 02:50:06.907542 1842604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:50:06.911554 1842604 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 02:50:06.912008 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.032285 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:07.187841 1842604 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:50:07.187907 1842604 certs.go:195] generating shared ca certs ...
	I1216 02:50:07.187938 1842604 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.188113 1842604 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:50:07.188262 1842604 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:50:07.188293 1842604 certs.go:257] generating profile certs ...
	I1216 02:50:07.188479 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:50:07.188626 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:50:07.188704 1842604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:50:07.188746 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 02:50:07.188833 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 02:50:07.188865 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 02:50:07.188913 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 02:50:07.188955 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 02:50:07.188991 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 02:50:07.189039 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 02:50:07.189094 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 02:50:07.189217 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:50:07.189294 1842604 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:50:07.189332 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:50:07.189413 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:50:07.189488 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:50:07.189568 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:50:07.189665 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:07.189733 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.189792 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem -> /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.189829 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.192734 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:50:07.215749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:50:07.236395 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:50:07.256110 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:50:07.276540 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:50:07.296274 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:50:07.314749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:50:07.333206 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:50:07.351818 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:50:07.370275 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:50:07.390851 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:50:07.409219 1842604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:50:07.421911 1842604 ssh_runner.go:195] Run: openssl version
	I1216 02:50:07.427966 1842604 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 02:50:07.428408 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.436062 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:50:07.443738 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447498 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447742 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447801 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.490768 1842604 command_runner.go:130] > b5213941
	I1216 02:50:07.491273 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
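The hash/symlink pair above follows the OpenSSL trust-directory convention: /etc/ssl/certs/<subject-hash>.0 points at the PEM whose `openssl x509 -hash` equals that name. The same wiring by hand (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    ls -l "/etc/ssl/certs/${h}.0"    # b5213941.0 for this CA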
	I1216 02:50:07.498894 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.506703 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:50:07.514440 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518338 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518429 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518508 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.559640 1842604 command_runner.go:130] > 51391683
	I1216 02:50:07.560095 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:50:07.567522 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.574982 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:50:07.582626 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586721 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586817 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586878 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.628240 1842604 command_runner.go:130] > 3ec20f2e
	I1216 02:50:07.628688 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
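The three test/ln/hash cycles above (minikubeCA.pem, 1798370.pem, 17983702.pem) wire each certificate into the system trust store: OpenSSL looks CAs up through symlinks named <subject-hash>.0, so minikube computes `openssl x509 -hash -noout` and then verifies that /etc/ssl/certs/<hash>.0 exists. Below is a minimal Go sketch that creates such a hash link directly; note minikube itself only links the PEM by name and verifies the hash link, so the helper is illustrative, not its actual code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA links a PEM into /etc/ssl/certs under its OpenSSL
	// subject-hash name, mirroring the ln/hash/test cycle in the log.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", pem, err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // `ln -fs` semantics: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}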
	I1216 02:50:07.636367 1842604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640270 1842604 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640300 1842604 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 02:50:07.640307 1842604 command_runner.go:130] > Device: 259,1	Inode: 2346079     Links: 1
	I1216 02:50:07.640313 1842604 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:07.640320 1842604 command_runner.go:130] > Access: 2025-12-16 02:45:59.904024015 +0000
	I1216 02:50:07.640326 1842604 command_runner.go:130] > Modify: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640331 1842604 command_runner.go:130] > Change: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640338 1842604 command_runner.go:130] >  Birth: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640415 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:50:07.685787 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.686316 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:50:07.726862 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.727358 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:50:07.769278 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.769775 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:50:07.810792 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.811300 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:50:07.852245 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.852345 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 02:50:07.894213 1842604 command_runner.go:130] > Certificate will not expire
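Each `openssl x509 -checkend 86400` above exits 0 when the certificate is still valid 86400 seconds (24 hours) from now, so "Certificate will not expire" means every control-plane cert has at least a day left. The equivalent check in pure Go, using one of the paths from this log (a sketch, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath expires
	// within d, i.e. the negation of a successful `openssl x509 -checkend`.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}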
	I1216 02:50:07.894706 1842604 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:07.894832 1842604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:50:07.894910 1842604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:50:07.922900 1842604 cri.go:89] found id: ""
	I1216 02:50:07.922983 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:50:07.930226 1842604 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 02:50:07.930256 1842604 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 02:50:07.930263 1842604 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 02:50:07.931439 1842604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
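The `sudo ls` probe above is what steers minikube toward a restart rather than a fresh `kubeadm init`: the kubelet config, the kubeadm flags file, and the etcd data directory all still exist inside the container. A rough sketch of that decision follows; the all-must-exist rule is our reading of this log, not necessarily the exact implementation.

	package main

	import (
		"fmt"
		"os"
	)

	// hasExistingCluster mirrors the probe above: treat the node as
	// restartable only if every kubeadm artifact is still present.
	func hasExistingCluster() bool {
		for _, p := range []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		} {
			if _, err := os.Stat(p); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println("existing cluster:", hasExistingCluster())
	}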
	I1216 02:50:07.931499 1842604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:50:07.931562 1842604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:50:07.943740 1842604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:50:07.944155 1842604 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.944257 1842604 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389759" cluster setting kubeconfig missing "functional-389759" context setting]
	I1216 02:50:07.944564 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
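"needs updating (will repair)" means the profile's cluster and context entries were missing from the kubeconfig, so minikube rewrites the file under a write lock before building a client from it. A hedged client-go sketch of such a repair (names, server URL, and CA path taken from this log; the exact fields minikube writes may differ):

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/22158-1796512/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			cfg = api.NewConfig() // start from an empty config if unreadable
		}
		// re-add the missing cluster and context entries for the profile
		cfg.Clusters["functional-389759"] = &api.Cluster{
			Server:               "https://192.168.49.2:8441",
			CertificateAuthority: "/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt",
		}
		cfg.Contexts["functional-389759"] = &api.Context{
			Cluster:  "functional-389759",
			AuthInfo: "functional-389759",
		}
		cfg.CurrentContext = "functional-389759"
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}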
	I1216 02:50:07.945009 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.945157 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:07.945886 1842604 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 02:50:07.945970 1842604 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 02:50:07.945985 1842604 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 02:50:07.945994 1842604 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 02:50:07.946007 1842604 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 02:50:07.946011 1842604 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 02:50:07.946339 1842604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:50:07.958263 1842604 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 02:50:07.958306 1842604 kubeadm.go:602] duration metric: took 26.787333ms to restartPrimaryControlPlane
	I1216 02:50:07.958316 1842604 kubeadm.go:403] duration metric: took 63.631777ms to StartCluster
	I1216 02:50:07.958333 1842604 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.958427 1842604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.959238 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.959525 1842604 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 02:50:07.959950 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:07.960006 1842604 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 02:50:07.960112 1842604 addons.go:70] Setting storage-provisioner=true in profile "functional-389759"
	I1216 02:50:07.960129 1842604 addons.go:239] Setting addon storage-provisioner=true in "functional-389759"
	I1216 02:50:07.960152 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:07.960945 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.961166 1842604 addons.go:70] Setting default-storageclass=true in profile "functional-389759"
	I1216 02:50:07.961188 1842604 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389759"
	I1216 02:50:07.961453 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.966091 1842604 out.go:179] * Verifying Kubernetes components...
	I1216 02:50:07.968861 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.999405 1842604 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:50:08.003951 1842604 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.003988 1842604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:50:08.004070 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.016743 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:08.016935 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:08.017229 1842604 addons.go:239] Setting addon default-storageclass=true in "functional-389759"
	I1216 02:50:08.017278 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:08.017759 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:08.056545 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.063547 1842604 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.063573 1842604 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:50:08.063643 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.096801 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.182820 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:08.204300 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.216429 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.938921 1842604 node_ready.go:35] waiting up to 6m0s for node "functional-389759" to be "Ready" ...
	I1216 02:50:08.939066 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:08.939127 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939127 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1216 02:50:08.939303 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939364 1842604 retry.go:31] will retry after 371.599151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939463 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:08.939655 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939681 1842604 retry.go:31] will retry after 208.586178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
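Every apply failure in this stretch has the same root cause: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and nothing is listening on localhost:8441 yet, so each apply exits 1 and minikube's retry loop backs off and reruns it, switching from plain `kubectl apply` to `kubectl apply --force` from the second attempt on. A minimal sketch of that jittered-backoff pattern (our own helper approximating the growing "will retry after …" delays, not minikube's retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry keeps re-running `kubectl apply --force` with jittered,
	// roughly doubling delays, the pattern behind the retry lines above.
	func applyWithRetry(manifest string, attempts int) error {
		delay := 200 * time.Millisecond
		for i := 1; i <= attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v\n%s", i, err, out)
			// up to 50% jitter so parallel appliers do not synchronize
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)+1)))
			if delay < 8*time.Second {
				delay *= 2 // cap the exponential growth
			}
		}
		return fmt.Errorf("apply %s: gave up after %d attempts", manifest, attempts)
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 10); err != nil {
			fmt.Println(err)
		}
	}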
	I1216 02:50:08.939821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.149421 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.213240 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.213284 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.213304 1842604 retry.go:31] will retry after 201.914515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.311585 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.373333 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.373376 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.373396 1842604 retry.go:31] will retry after 439.688248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.415509 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.439207 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.483422 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.483469 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.483489 1842604 retry.go:31] will retry after 841.778226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.814006 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.876109 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.880285 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.880327 1842604 retry.go:31] will retry after 574.892877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.939502 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.939583 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.939923 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.325447 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:10.394946 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.394995 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.395014 1842604 retry.go:31] will retry after 1.198470662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.439106 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.439176 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.439428 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.455825 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:10.523765 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.523815 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.523848 1842604 retry.go:31] will retry after 636.325982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.939367 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:10.939781 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
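The repeating GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 requests are the readiness poll started at "waiting up to 6m0s": roughly every 500 ms minikube refetches the node and inspects its Ready condition, tolerating connection-refused until the apiserver comes back. Sketched with client-go below; the kubeconfig path and node name come from this log, and the cadence is inferred from the timestamps.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22158-1796512/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-389759", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // keep polling through connection-refused
		}
		fmt.Println("timed out waiting for Ready")
	}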
	I1216 02:50:11.161191 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:11.242833 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.242908 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.242943 1842604 retry.go:31] will retry after 1.140424726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.439654 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:11.594053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:11.649408 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.653163 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.653194 1842604 retry.go:31] will retry after 1.344955883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.939594 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.939687 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.940009 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.383614 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:12.439264 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.439344 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.440165 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:12.443835 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.443867 1842604 retry.go:31] will retry after 2.819298169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.939234 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.939324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.999127 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:13.066096 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:13.066142 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.066177 1842604 retry.go:31] will retry after 2.29209329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.439591 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.439676 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.440017 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:13.440078 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:13.939859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.939946 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.940333 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.439599 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:15.264053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:15.323662 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.327080 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.327109 1842604 retry.go:31] will retry after 3.65241611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.359324 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:15.421588 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.421635 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.421654 1842604 retry.go:31] will retry after 1.62104706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.439778 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.439879 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.440170 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:15.440216 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:15.940008 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.940084 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.940410 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.440078 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.440155 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.440450 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:17.043912 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:17.099707 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:17.103362 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.103395 1842604 retry.go:31] will retry after 4.481188348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.439835 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.439929 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.440261 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:17.440327 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:17.940004 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.940083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.940382 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.439649 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.939235 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.939311 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.939696 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.980018 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:19.042087 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:19.045748 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.045786 1842604 retry.go:31] will retry after 3.780614615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.439604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:19.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.939337 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.939666 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:19.939722 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:20.439426 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.439516 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.439851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:20.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.939502 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.439268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.585043 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:21.648279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:21.648322 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.648342 1842604 retry.go:31] will retry after 5.326379112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.939713 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.939790 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.940115 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:21.940177 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:22.439859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.439927 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.440196 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:22.826669 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:22.887724 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:22.891256 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.891291 1842604 retry.go:31] will retry after 7.007720529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.939466 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.939552 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.939870 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.439633 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.439715 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.440036 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.939677 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.939748 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.940008 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:24.439927 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.440005 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.440343 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:24.440400 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:24.939690 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.939766 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.940068 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.439712 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.439799 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.440085 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.939947 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.940024 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.940358 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.439107 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.439185 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.939570 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:26.939627 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:26.975786 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:27.047539 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:27.047579 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.047598 1842604 retry.go:31] will retry after 10.416340882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.439244 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.439321 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.439647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:27.939345 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.939450 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.939785 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.439175 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.439255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.439518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.939274 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.939371 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.939720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:28.939777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:29.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.439629 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.899356 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:29.940020 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.940094 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.940346 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.975996 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:29.976895 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:29.976922 1842604 retry.go:31] will retry after 13.637319362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:30.439216 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.439575 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:30.939293 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.939381 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:31.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:31.439575 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:31.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.939298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.439412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.439784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:33.439275 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.439356 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.439660 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:33.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:33.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.439634 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.439714 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.439961 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.939579 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.939653 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:35.439846 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.439925 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.440292 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:35.440352 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:35.940006 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.940080 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.940335 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.440174 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.440258 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.440580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.939312 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.939396 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.939727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.439556 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.464839 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:37.535691 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:37.535727 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.535748 1842604 retry.go:31] will retry after 13.417840341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.939229 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:37.939658 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:38.439518 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.439602 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.439942 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:38.939665 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.939784 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.940059 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.440088 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.440162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.440456 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.939587 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:40.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.439222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.439491 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:40.439540 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:40.939209 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.439307 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.939288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.939552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:42.439218 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.439302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.439616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:42.439677 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:42.939334 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.939784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.439162 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.439595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.615150 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:43.680878 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:43.680928 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.680947 1842604 retry.go:31] will retry after 17.388789533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.939315 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.939409 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.939851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:44.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:44.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:44.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:44.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:44.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:44.939520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:44.939567 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:45.439260 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:45.439342 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:45.439687 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:45.939401 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:45.939486 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:45.939829 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:46.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:46.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:46.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:46.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:46.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:46.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:46.939686 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:47.439343 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:47.439416 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:47.439727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:47.939183 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:47.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:47.939522 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:48.439361 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:48.439438 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:48.439738 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:48.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:48.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:48.939638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:49.439150 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:49.439224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:49.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:49.439525 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:49.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:49.939270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:49.939604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:50.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:50.439605 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:50.939264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:50.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.954020 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:51.020279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:51.020323 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.020343 1842604 retry.go:31] will retry after 13.418822402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.440005 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:51.440079 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:51.440420 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:51.440473 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:51.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:51.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:51.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:52.439148 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:52.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:52.439483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:52.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:52.939281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:52.939608 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:53.439303 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:53.439380 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:53.439660 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:53.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:53.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:53.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:53.939608 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:54.439173 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:54.439246 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:54.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:54.939220 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:54.939295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:54.939592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:55.439171 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:55.439239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:55.439492 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:55.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:55.939253 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:55.939589 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:55.939646 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:56.439315 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:56.439401 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:56.439752 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:56.939227 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:56.939298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:56.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:57.439234 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:57.439306 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:57.439656 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:57.939434 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:57.939515 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:57.939851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:57.939910 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:58.439684 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:58.439750 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:58.440021 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:58.939803 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:58.939878 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:58.940159 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:59.440068 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:59.440142 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:59.440488 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:59.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:59.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:59.939482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:00.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:00.439279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:00.439642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:00.439702 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:00.939370 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:00.939452 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:00.939786 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:01.070030 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:01.132180 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:01.132233 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:01.132254 1842604 retry.go:31] will retry after 31.549707812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:01.439619 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:01.439687 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:01.439937 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:01.939692 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:01.939769 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:01.940100 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:02.439919 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:02.439992 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:02.440273 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:02.440317 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:02.939603 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:02.939682 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:02.939996 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:03.439752 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:03.439849 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:03.440174 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:03.939981 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:03.940061 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:03.940401 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:04.439159 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:04.439236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:04.439336 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:04.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:04.517286 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:04.517332 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:04.517352 1842604 retry.go:31] will retry after 44.886251271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
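
The validation error itself is secondary: kubectl apply first downloads the cluster's OpenAPI schema from the apiserver to validate the manifest, so when nothing is listening on port 8441 even a well-formed manifest fails with this "failed to download openapi" message (hence the suggestion to pass --validate=false). A short probe of that endpoint makes the distinction between an apiserver outage and a bad manifest explicit. Illustrative sketch only; the insecure TLS config and 5s timeout are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOpenAPI checks whether the apiserver's OpenAPI endpoint (the one
    // kubectl validation downloads, per the log above) is reachable at all.
    // A connection-refused error here means the applies are failing because
    // the apiserver is down, not because the manifests are invalid.
    func probeOpenAPI(base string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(base + "/openapi/v2?timeout=32s")
    	if err != nil {
    		return fmt.Errorf("apiserver unreachable: %w", err)
    	}
    	defer resp.Body.Close()
    	fmt.Println("openapi endpoint status:", resp.Status)
    	return nil
    }

    func main() {
    	if err := probeOpenAPI("https://localhost:8441"); err != nil {
    		fmt.Println(err)
    	}
    }
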
	I1216 02:51:04.939909 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:04.939982 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:04.940274 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:04.940323 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:05.440102 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:05.440174 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:05.440508 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:05.939164 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:05.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:05.939577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:06.439220 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:06.439296 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:06.439582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:06.939309 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:06.939386 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:06.939705 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:07.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:07.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:07.439515 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:07.439570 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:07.939211 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:07.939296 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:07.939674 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:08.439772 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:08.439857 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:08.440214 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:08.939551 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:08.939621 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:08.939875 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:09.439806 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:09.439883 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:09.440225 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:09.440285 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:09.939882 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:09.939963 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:09.940293 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:10.439792 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:10.439859 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:10.440124 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:10.939880 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:10.939951 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:10.940239 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:11.439938 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:11.440017 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:11.440352 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:11.440408 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:11.939683 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:11.939756 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:11.940083 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:12.439886 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:12.439960 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:12.440334 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:12.939084 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:12.939160 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:12.939518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:13.439150 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:13.439219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:13.439473 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:13.939264 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:13.939339 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:13.939705 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:13.939773 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:14.439208 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:14.439283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:14.439647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:14.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:14.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:14.939497 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:15.439184 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:15.439279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:15.439650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:15.939230 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:15.939311 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:15.939656 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:16.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:16.439211 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:16.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:16.439517 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:16.939190 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:16.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:16.939636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:17.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:17.439317 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:17.439673 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:17.939182 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:17.939248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:17.939571 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:18.439237 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:18.439312 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:18.439647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:18.439701 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:18.939233 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:18.939310 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:18.939653 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:19.439138 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:19.439208 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:19.439515 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:19.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:19.939279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:19.939621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:20.439178 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:20.439262 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:20.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:20.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:20.939246 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:20.939513 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:20.939554 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:21.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:21.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:21.439595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:21.939329 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:21.939405 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:21.939763 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:22.439160 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:22.439238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:22.439500 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:22.939190 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:22.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:22.939658 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:22.939729 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:23.439235 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:23.439307 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:23.439624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:23.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:23.939222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:23.939532 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:24.439173 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:24.439247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:24.439604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:24.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:24.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:24.939597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:25.439196 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:25.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:25.439524 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:25.439565 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:25.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:25.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:25.939596 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:26.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:26.439242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:26.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:26.939226 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:26.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:26.939651 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:27.439209 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:27.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:27.439650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:27.439706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:27.939251 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:27.939341 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:27.939686 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:28.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:28.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:28.439565 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:28.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:28.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:28.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:29.439212 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:29.439293 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:29.439636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:29.939142 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:29.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:29.939475 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:29.939523 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:30.439207 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:30.439302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:30.439638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:30.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:30.939282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:30.939613 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:31.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:31.439209 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:31.439467 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:31.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:31.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:31.939567 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:31.939621 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:32.439180 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:32.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:32.439615 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:32.683088 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:32.746941 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:32.746981 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:32.747001 1842604 retry.go:31] will retry after 33.271174209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
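
Every failure in this stretch, whether from kubectl on localhost or from the readiness poller against the node IP, bottoms out in the same TCP-level symptom: connection refused on port 8441, meaning no process is listening there, as opposed to a timeout, which would point at routing or firewalling. A raw dial reproduces the distinction; the host and port are taken from the log, the rest is an illustrative sketch:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialAPIServer attempts a raw TCP connection to the apiserver endpoint
    // from the log. "connection refused" means the port is closed (the
    // apiserver process is down or not yet listening); a timeout would
    // instead suggest the packets never reach the host.
    func dialAPIServer(addr string) {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port is open:", addr)
    }

    func main() {
    	dialAPIServer("192.168.49.2:8441") // node IP and port from the log
    }
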
	I1216 02:51:32.939435 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:32.939505 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:32.939763 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:33.439294 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:33.439398 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:33.439739 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:33.939479 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:33.939568 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:33.939898 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:33.939952 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:34.439808 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:34.439880 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:34.440131 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:34.939954 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:34.940033 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:34.940352 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:35.439094 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:35.439177 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:35.439525 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:35.939161 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:35.939242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:35.939584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:36.439299 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:36.439392 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:36.439763 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:36.439816 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:36.939492 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:36.939567 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:36.939939 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:37.439667 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:37.439746 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:37.440054 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:37.939849 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:37.939922 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:37.940282 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:38.439087 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:38.439168 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:38.439498 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:38.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:38.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:38.939484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:38.939564 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:39.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:39.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:39.439617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:39.939340 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:39.939424 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:39.939761 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:40.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:40.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:40.439520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:40.939185 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:40.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:40.939607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:40.939662 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:41.439347 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:41.439438 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:41.439746 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:41.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:41.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:41.939550 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:42.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:42.439250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:42.439546 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:42.939307 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:42.939384 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:42.939729 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:42.939786 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:43.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:43.439208 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:43.439471 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:43.939165 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:43.939249 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:43.939619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:44.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:44.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:44.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:44.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:44.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:44.939486 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:45.439208 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:45.439295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:45.439645 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:45.439707 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:45.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:45.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:45.939620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:46.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:46.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:46.439495 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:46.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:46.939236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:46.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:47.439327 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:47.439404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:47.439724 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:47.439777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:47.939150 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:47.939227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:47.939484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:48.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:48.439273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:48.439629 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:48.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:48.939303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:48.939613 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:49.404519 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:49.440015 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.440083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.440326 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:49.440365 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:49.475362 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475396 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475480 1842604 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
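
With the storageclass addon abandoned above, the node_ready.go poll is what keeps issuing the GET /api/v1/nodes/functional-389759 requests seen throughout this log, waiting for the node's Ready condition to turn True. A minimal client-go sketch of that check, assuming the kubeconfig path and node name from the log; this is illustrative, not minikube's own node_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the node's Ready condition is True,
    // which is the condition the poll above keeps retrying.
    func isNodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path and node name are taken from the log for illustration.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-389759", metav1.GetOptions{})
    		if err != nil {
    			// Matches the log: connection refused surfaces here and we retry.
    			fmt.Println("will retry:", err)
    		} else if isNodeReady(node) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls at ~500ms intervals
    	}
    }
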
	I1216 02:51:49.939258 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.939331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:50.439273 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:50.439350 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:50.439747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:50.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:50.939239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:50.939510 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:51.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:51.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:51.439630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 poll above repeats unchanged every ~500ms through 02:52:05.939, each attempt logging status="" headers="" milliseconds=0; node_ready.go:55 emits the retry warning below roughly every 2 seconds, e.g.:]
	W1216 02:51:51.939799 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
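The warnings above come from minikube's node-readiness wait, which GETs the node object on an interval and retries while the apiserver port still refuses connections. A minimal sketch of that pattern with client-go (the kubeconfig path, timeout, and interval are illustrative, not minikube's actual values):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports
// condition Ready=True, retrying transient errors such as the
// "connection refused" seen in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// The apiserver may still be starting; log and retry.
			log.Printf("error getting node %q condition \"Ready\" status (will retry): %v", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-389759", 500*time.Millisecond); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}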
	I1216 02:52:06.018765 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:52:06.088979 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089023 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089108 1842604 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 02:52:06.092116 1842604 out.go:179] * Enabled addons: 
	I1216 02:52:06.094111 1842604 addons.go:530] duration metric: took 1m58.134103468s for enable addons: enabled=[]
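The addon failure above shows minikube shelling out to the cluster's own kubectl binary; the apply fails because client-side validation first fetches /openapi/v2 from the same unreachable apiserver, and addons.go logs "apply failed, will retry". A rough sketch of that retry-around-exec pattern, assuming an already-downloaded kubectl (the retry count and backoff are illustrative, not minikube's actual policy):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` against the
// given kubeconfig, retrying because the apiserver may not be
// reachable yet (client-side validation fetches /openapi/v2 first).
func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		if out, err := cmd.CombinedOutput(); err != nil {
			lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
			log.Print(lastErr)
			time.Sleep(backoff)
			continue
		}
		return nil
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		5, 2*time.Second, // illustrative retry policy
	)
	if err != nil {
		log.Fatal(err)
	}
}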
	I1216 02:52:06.439418 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:06.439511 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:06.439875 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll continues every ~500ms from 02:52:06.939 through 02:52:50.439, still with no response; the node_ready.go:55 retry warning recurs roughly every 2 seconds, e.g.:]
	W1216 02:52:07.939631 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:50.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:50.939217 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:50.939512 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:51.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:51.439257 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:51.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:51.939305 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:51.939392 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:51.939737 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:51.939789 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:52.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:52.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:52.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:52.939183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:52.939262 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:52.939616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:53.439235 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:53.439314 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:53.439636 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:53.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:53.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:53.939491 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:54.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:54.439271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:54.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:54.439659 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:54.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:54.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:54.939623 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:55.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:55.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:55.439546 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:55.939182 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:55.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:55.939593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:56.439309 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:56.439391 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:56.439719 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:56.439777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:56.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:56.939267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:56.939545 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:57.439183 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:57.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:57.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:57.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:57.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:57.939589 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:58.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:58.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:58.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:58.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:58.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:58.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:52:58.939617 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:52:59.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:59.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:59.439612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:59.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:59.939212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:59.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:00.439229 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:00.439331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:00.439692 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:00.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:00.939273 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:00.939593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:00.939649 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:01.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:01.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.939638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.439351 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.439429 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.439735 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.939471 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:03.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:03.439650 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:03.939321 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.939395 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.439219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.439542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.939207 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.939477 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:05.939529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:06.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.439240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.439557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:06.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.939724 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.439508 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.939335 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.939685 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:07.939739 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:08.439494 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.439576 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.439903 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:08.939686 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.939764 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.940063 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.440032 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.440108 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.440422 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.939177 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.939592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:10.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:10.439589 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:10.939186 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.439598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:12.439168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:12.439639 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:12.939298 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.939374 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.939265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:14.439387 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.439472 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.439812 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:14.439867 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:14.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.939501 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.439631 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.939227 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.939303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.939647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.939242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.939600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:16.939656 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:17.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.439397 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.439744 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:17.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.439237 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.439576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.939765 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:18.939822 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:19.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.439538 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:19.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.939233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.439182 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:21.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.439583 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:21.439640 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:21.939297 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.939731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.439139 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.439207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.439539 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.939225 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.939299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:23.439346 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.439421 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.439771 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:23.439824 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:23.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.939483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.439299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.439632 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.939319 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.939706 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.439591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.939582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:25.939638 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:26.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.439592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:26.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.939259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.439601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.939270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.939597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:28.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:28.439529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:28.939176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.939576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.439313 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.439393 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.439725 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.939171 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:30.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:30.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:30.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:30.439655 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:30.939320 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:30.939418 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:30.939784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:31.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:31.439224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:31.439475 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:31.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:31.939236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:31.939534 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:32.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:32.439297 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:32.439620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:32.439676 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:32.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:32.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:32.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:33.439217 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:33.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:33.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:33.939225 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:33.939316 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:33.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:34.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:34.439245 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:34.439564 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:34.939204 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:34.939278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:34.939620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:34.939677 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:35.439212 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:35.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:35.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:35.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:35.939263 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:35.939567 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:36.439228 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:36.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:36.439642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:36.939277 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:36.939377 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:36.939732 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:36.939794 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:37.439162 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:37.439230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:37.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:37.939236 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:37.939314 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:37.939606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:38.439417 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:38.439490 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:38.439842 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:38.939161 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:38.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:38.939518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:39.439409 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:39.439482 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:39.439830 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:39.439883 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:39.939560 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:39.939640 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:39.939973 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:41.440543 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll against /api/v1/nodes/functional-389759 repeats every ~500ms from 02:53:41.943 through 02:54:42.439; every attempt fails the TCP dial with "connect: connection refused" (empty status, 0-5 ms), and node_ready.go:55 re-emits the "will retry" warning after roughly every fourth attempt ...]
	 >
	I1216 02:54:42.439561 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:42.939283 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:42.939360 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:42.939717 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:42.939776 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:43.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.439482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:43.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:43.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:43.939617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.439181 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.439582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:44.939158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:44.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:44.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:45.439421 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.439558 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.440302 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:45.440450 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:45.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:45.939295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:45.939642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.439483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:46.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:46.939250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:46.939578 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.439143 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.439549 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:47.940079 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:47.940162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:47.940421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:47.940463 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:48.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.439275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.439617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:48.939330 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:48.939404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:48.939700 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.439154 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.439238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.439553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:49.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:49.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:49.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:50.439334 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.439719 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:50.439774 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:50.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.439267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.439563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.439158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.439503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.939205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:52.939669 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:53.439238 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.439324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:53.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.939529 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.439450 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.439530 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.439887 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.939607 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.939685 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:54.940065 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:55.439530 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.439603 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.439907 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:55.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.439329 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.439662 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.939519 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:57.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.439641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:57.439700 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:57.939371 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.939446 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.939781 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.439185 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.439281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.439643 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.939188 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:59.939684 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:00.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.439758 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:00.939212 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.939641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.939279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.939639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:01.939706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:02.439386 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.439463 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.439808 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:02.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.939260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.939584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:04.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.439530 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:04.439579 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:04.939230 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.939672 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.439280 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.439358 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.439751 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.939466 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:06.439161 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:06.439630 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:06.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.939644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.439279 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.439353 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.439621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.939358 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.939442 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.939811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:08.439810 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.439889 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.440239 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:08.440291 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:08.939609 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.939681 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.939929 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.439800 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.439881 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.440208 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.939526 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.939604 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.939943 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.439719 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.439792 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.440067 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.939812 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.939897 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.940243 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:10.940297 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:11.440018 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.440100 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.440421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:11.939675 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.939759 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.940020 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.439848 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.439919 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.440237 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.940077 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.940153 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.940509 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:12.940583 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:13.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:13.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.939268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.939610 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.439597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.939566 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:15.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.439663 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:15.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:15.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.939245 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.939569 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.439533 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.939603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:17.439324 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.439747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:17.439815 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:17.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.939524 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.439650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.939285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.439223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.439468 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:19.939616 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:20.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.439548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:20.939137 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.939207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.939465 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:21.439204 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:21.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:21.439657 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:21.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:21.939327 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:21.939664 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:21.939729 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:22.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:22.439225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:22.439535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:22.939190 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:22.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:22.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:23.439305 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:23.439389 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:23.439720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:23.939151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:23.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:23.939542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:24.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:24.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:24.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:24.439651 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:24.939180 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:24.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:24.939598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:25.439138 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:25.439211 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:25.439463 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:25.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:25.939244 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:25.939581 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:26.439089 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:26.439163 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:26.439495 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:26.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:26.939224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:26.939479 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:26.939524 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:27.439214 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:27.439294 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:27.439661 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:27.939373 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:27.939451 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:27.939805 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:28.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:28.439250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:28.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:28.939221 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:28.939302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:28.939614 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:28.939667 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:29.439402 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:29.439474 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:29.439787 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:29.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:29.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:29.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:30.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:30.439272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:30.439596 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:30.939323 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:30.939401 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:30.939721 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:30.939776 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:31.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:31.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:31.439500 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll of /api/v1/nodes/functional-389759 repeats every ~500ms from 02:55:31 through 02:56:08, every attempt refused with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 re-emits the same "will retry" warning roughly every 2-2.5s for the whole interval ...]
	I1216 02:56:08.439510 1842604 type.go:168] "Request Body" body=""
	I1216 02:56:08.439620 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:56:08.440275 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:56:08.939119 1842604 node_ready.go:38] duration metric: took 6m0.000151723s for node "functional-389759" to be "Ready" ...
	I1216 02:56:08.942443 1842604 out.go:203] 
	W1216 02:56:08.945313 1842604 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 02:56:08.945510 1842604 out.go:285] * 
	W1216 02:56:08.947818 1842604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:56:08.950773 1842604 out.go:203] 
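	[editor's note] The log above shows a fixed-interval poll raced against a hard deadline: retry every ~500ms, give up when the 6m context expires. Below is a minimal sketch of that wait shape in Go, the project's own language. The name waitNodeReady, the plain http.Get probe, and the literal 500ms ticker are illustrative assumptions drawn from the log cadence, not minikube's actual WaitNodeCondition implementation.

	// Sketch only: poll an endpoint until it answers or the context deadline fires.
	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	func waitNodeReady(ctx context.Context, url string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// Surfaces as "WaitNodeCondition: context deadline exceeded" above.
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-ticker.C:
				resp, err := http.Get(url)
				if err != nil {
					continue // connection refused while the apiserver is down; retry
				}
				resp.Body.Close()
				return nil // reachable; a real client would now inspect the node's Ready condition
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-389759")
		if err != nil {
			fmt.Println(err) // printed after 6m when nothing ever answers, as in this run
		}
	}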
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:16 functional-389759 containerd[5252]: time="2025-12-16T02:56:16.500864545Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.584917119Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.586986759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.595442921Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.596292030Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.548403756Z" level=info msg="No images store for sha256:a041f367bbd28fc7382515453bcff124b359f5670c41bb859c301862fece5890"
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.550521214Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-389759\""
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.558262379Z" level=info msg="ImageCreate event name:\"sha256:106be42faa4dd4147343df978553d005570a8f7212a8499769ed94b75df65cdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.558804072Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-389759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.349318710Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.351872443Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.353850252Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.366018149Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.343034268Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.345466444Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.347430945Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.355266975Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.519415516Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.521760604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.531285366Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.531633659Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.652877251Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.654971957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.662261093Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.663564330Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:56:22.428947    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:22.429603    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:22.431185    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:22.431632    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:22.433062    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
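	[editor's note] The five memcache.go errors above are kubectl's API discovery retries all hitting the same closed port. A minimal connectivity probe, an assumed helper and not part of the test suite, reproduces the symptom directly:

	// Sketch only: with no apiserver listening on 8441 this prints the same
	// "connect: connection refused" the kubectl discovery calls report.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}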
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:56:22 up  8:38,  0 user,  load average: 0.92, 0.42, 0.85
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 02:56:19 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:20 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 16 02:56:20 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:20 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:20 functional-389759 kubelet[9020]: E1216 02:56:20.261464    9020 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:20 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:20 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:20 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 16 02:56:20 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:20 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:21 functional-389759 kubelet[9119]: E1216 02:56:21.002177    9119 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:21 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:21 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:21 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 16 02:56:21 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:21 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:21 functional-389759 kubelet[9142]: E1216 02:56:21.754391    9142 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:21 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:21 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:22 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 16 02:56:22 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:22 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:22 functional-389759 kubelet[9231]: E1216 02:56:22.494255    9231 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:22 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:22 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
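[editor's note] The kubelet section of the dump above is the root cause behind every connection-refused error in this report: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host (restart counter at 827), so the apiserver never comes up. A minimal sketch, assuming golang.org/x/sys/unix and not the kubelet's actual validation code, of how a process can detect which cgroup hierarchy the host mounts:

// Sketch only: cgroup v2 mounts cgroup2fs at /sys/fs/cgroup, identifiable
// by its filesystem magic number.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("host uses cgroup v2 (unified hierarchy)")
	} else {
		// Per the kubelet error above, this runner (Ubuntu 20.04,
		// kernel 5.15.0-1084-aws) takes this branch.
		fmt.Println("host uses cgroup v1; kubelet v1.35.0-beta.0 refuses to run here")
	}
}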
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (402.169074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
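[editor's note] The --format={{.APIServer}} flag used by the status check above is a Go text/template, per minikube's own help text: a field name in braces selects one field of the status struct. A small illustration follows; the Status type here is a stand-in for the sketch, not minikube's real type.

// Sketch only: render one field of a status struct through text/template,
// the mechanism behind minikube's --format flag.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// With the cluster in the state captured above, this prints "Stopped".
	tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
}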
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-389759 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-389759 get pods: exit status 1 (117.229305ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-389759 get pods": exit status 1
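[editor's note] functional_test.go drives out/kubectl the way any Go test can shell out to a binary. A stripped-down sketch, an assumed illustration rather than the actual harness helper, of running the command and capturing the non-zero exit seen above:

// Sketch only: run kubectl against the minikube context and report its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/kubectl", "--context", "functional-389759", "get", "pods")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// With the apiserver down this reports exit status 1, as logged above.
		fmt.Println("non-zero exit:", ee.ExitCode())
	}
}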
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
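For reference, the dynamically assigned ports recorded under NetworkSettings.Ports in the inspect output above can be read back with the same Go-template pattern the harness uses later in these logs; a minimal sketch for the apiserver port (8441/tcp and the profile name are taken from the inspect output above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389759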
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (305.207715ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
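minikube status encodes component health in its exit code, so a host that prints Running can still exit 2 when other components (apiserver, kubelet) are not up; that is why the harness marks the error "may be ok". A fuller, untemplated view is available as a sketch (assuming the --output json flag of minikube status):

	out/minikube-linux-arm64 status -p functional-389759 --output json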
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-853651 image ls --format json --alsologtostderr                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format table --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls                                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ delete         │ -p functional-853651                                                                                                                                    │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-389759 --alsologtostderr -v=8                                                                                                             │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:50 UTC │                     │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:latest                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add minikube-local-cache-test:functional-389759                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache delete minikube-local-cache-test:functional-389759                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl images                                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ cache          │ functional-389759 cache reload                                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ kubectl        │ functional-389759 kubectl -- --context functional-389759 get pods                                                                                       │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:50:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:50:03.940449 1842604 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:50:03.940640 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.940666 1842604 out.go:374] Setting ErrFile to fd 2...
	I1216 02:50:03.940685 1842604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:50:03.941001 1842604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:50:03.941424 1842604 out.go:368] Setting JSON to false
	I1216 02:50:03.942302 1842604 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30748,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:50:03.942395 1842604 start.go:143] virtualization:  
	I1216 02:50:03.948050 1842604 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:50:03.951289 1842604 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:50:03.951381 1842604 notify.go:221] Checking for updates...
	I1216 02:50:03.954734 1842604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:50:03.957600 1842604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:03.960611 1842604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:50:03.963508 1842604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:50:03.966329 1842604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:50:03.969672 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:03.969806 1842604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:50:04.007031 1842604 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:50:04.007241 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.073702 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.062313817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.073812 1842604 docker.go:319] overlay module found
	I1216 02:50:04.077006 1842604 out.go:179] * Using the docker driver based on existing profile
	I1216 02:50:04.079902 1842604 start.go:309] selected driver: docker
	I1216 02:50:04.079932 1842604 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.080054 1842604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:50:04.080179 1842604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:50:04.136011 1842604 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 02:50:04.126842192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:50:04.136427 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:04.136482 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:50:04.136533 1842604 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:04.139723 1842604 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:50:04.142545 1842604 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:50:04.145565 1842604 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:50:04.148399 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:04.148453 1842604 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:50:04.148469 1842604 cache.go:65] Caching tarball of preloaded images
	I1216 02:50:04.148474 1842604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:50:04.148567 1842604 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:50:04.148577 1842604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:50:04.148682 1842604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:50:04.168498 1842604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:50:04.168522 1842604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:50:04.168544 1842604 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:50:04.168575 1842604 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:50:04.168643 1842604 start.go:364] duration metric: took 46.539µs to acquireMachinesLock for "functional-389759"
	I1216 02:50:04.168667 1842604 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:50:04.168673 1842604 fix.go:54] fixHost starting: 
	I1216 02:50:04.168962 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:04.192862 1842604 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:50:04.192891 1842604 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:50:04.196202 1842604 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:50:04.196246 1842604 machine.go:94] provisionDockerMachine start ...
	I1216 02:50:04.196329 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.213973 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.214316 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.214325 1842604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:50:04.350600 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.350628 1842604 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:50:04.350691 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.368974 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.369299 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.369316 1842604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:50:04.513062 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:50:04.513215 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.531552 1842604 main.go:143] libmachine: Using SSH client type: native
	I1216 02:50:04.531870 1842604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:50:04.531893 1842604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:50:04.663498 1842604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
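The SSH block above is the standard hostname pin: if no /etc/hosts line ends in the profile name, the 127.0.1.1 entry is rewritten to functional-389759. A quick manual check, as a sketch reusing the profile flag from these logs:

	out/minikube-linux-arm64 -p functional-389759 ssh "grep 127.0.1.1 /etc/hosts"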
	I1216 02:50:04.663573 1842604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:50:04.663612 1842604 ubuntu.go:190] setting up certificates
	I1216 02:50:04.663658 1842604 provision.go:84] configureAuth start
	I1216 02:50:04.663756 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:04.681830 1842604 provision.go:143] copyHostCerts
	I1216 02:50:04.681871 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681914 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:50:04.681921 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:50:04.681996 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:50:04.682080 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682098 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:50:04.682107 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:50:04.682134 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:50:04.682171 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682188 1842604 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:50:04.682192 1842604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:50:04.682218 1842604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:50:04.682263 1842604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:50:04.918732 1842604 provision.go:177] copyRemoteCerts
	I1216 02:50:04.918803 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:50:04.918909 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:04.945401 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.043237 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 02:50:05.043301 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:50:05.061641 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 02:50:05.061702 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:50:05.079841 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 02:50:05.079956 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:50:05.097722 1842604 provision.go:87] duration metric: took 434.019439ms to configureAuth
	I1216 02:50:05.097754 1842604 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:50:05.097953 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:05.097967 1842604 machine.go:97] duration metric: took 901.714132ms to provisionDockerMachine
	I1216 02:50:05.097975 1842604 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:50:05.097987 1842604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:50:05.098051 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:50:05.098102 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.115383 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.211319 1842604 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:50:05.214768 1842604 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 02:50:05.214793 1842604 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 02:50:05.214797 1842604 command_runner.go:130] > VERSION_ID="12"
	I1216 02:50:05.214802 1842604 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 02:50:05.214807 1842604 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 02:50:05.214810 1842604 command_runner.go:130] > ID=debian
	I1216 02:50:05.214815 1842604 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 02:50:05.214820 1842604 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 02:50:05.214826 1842604 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 02:50:05.214871 1842604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:50:05.214894 1842604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:50:05.214911 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:50:05.214973 1842604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:50:05.215088 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:50:05.215101 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /etc/ssl/certs/17983702.pem
	I1216 02:50:05.215203 1842604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:50:05.215211 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> /etc/test/nested/copy/1798370/hosts
	I1216 02:50:05.215287 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:50:05.223273 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:05.241790 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:50:05.259718 1842604 start.go:296] duration metric: took 161.727689ms for postStartSetup
	I1216 02:50:05.259801 1842604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:50:05.259846 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.277760 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.371870 1842604 command_runner.go:130] > 18%
	I1216 02:50:05.372496 1842604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:50:05.377201 1842604 command_runner.go:130] > 161G
	I1216 02:50:05.377708 1842604 fix.go:56] duration metric: took 1.209030723s for fixHost
	I1216 02:50:05.377728 1842604 start.go:83] releasing machines lock for "functional-389759", held for 1.209073027s
	I1216 02:50:05.377811 1842604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:50:05.395427 1842604 ssh_runner.go:195] Run: cat /version.json
	I1216 02:50:05.395497 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.395795 1842604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:50:05.395856 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:05.414621 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.417076 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:05.510754 1842604 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 02:50:05.510902 1842604 ssh_runner.go:195] Run: systemctl --version
	I1216 02:50:05.609923 1842604 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 02:50:05.612841 1842604 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 02:50:05.612896 1842604 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 02:50:05.613034 1842604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 02:50:05.617736 1842604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 02:50:05.617774 1842604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:50:05.617838 1842604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:50:05.626000 1842604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:50:05.626028 1842604 start.go:496] detecting cgroup driver to use...
	I1216 02:50:05.626059 1842604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:50:05.626109 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:50:05.644077 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:50:05.659636 1842604 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:50:05.659709 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:50:05.676805 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:50:05.692573 1842604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:50:05.816755 1842604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:50:05.944883 1842604 docker.go:234] disabling docker service ...
	I1216 02:50:05.944952 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:50:05.960111 1842604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:50:05.973273 1842604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:50:06.102700 1842604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:50:06.226099 1842604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:50:06.239914 1842604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:50:06.254235 1842604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 02:50:06.255720 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:50:06.265881 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:50:06.274988 1842604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:50:06.275099 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:50:06.284319 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.293767 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:50:06.302914 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:50:06.312051 1842604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:50:06.320364 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:50:06.329464 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:50:06.338574 1842604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:50:06.347623 1842604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:50:06.354520 1842604 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 02:50:06.355609 1842604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:50:06.363468 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:06.501216 1842604 ssh_runner.go:195] Run: sudo systemctl restart containerd
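Taken together, the sed invocations above reconfigure containerd for this run: pin the sandbox (pause) image, force the cgroupfs driver (SystemdCgroup = false), normalize the runtime name to io.containerd.runc.v2, and point the CNI conf_dir at /etc/cni/net.d. A consolidated sketch of the key edits, copied from the commands logged above:

	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	sudo systemctl restart containerd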
	I1216 02:50:06.641570 1842604 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:50:06.641646 1842604 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:50:06.645599 1842604 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1216 02:50:06.645623 1842604 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 02:50:06.645629 1842604 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1216 02:50:06.645636 1842604 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:06.645642 1842604 command_runner.go:130] > Access: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645647 1842604 command_runner.go:130] > Modify: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645652 1842604 command_runner.go:130] > Change: 2025-12-16 02:50:06.598352293 +0000
	I1216 02:50:06.645656 1842604 command_runner.go:130] >  Birth: -
	I1216 02:50:06.645685 1842604 start.go:564] Will wait 60s for crictl version
	I1216 02:50:06.645740 1842604 ssh_runner.go:195] Run: which crictl
	I1216 02:50:06.649139 1842604 command_runner.go:130] > /usr/local/bin/crictl
	I1216 02:50:06.649430 1842604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:50:06.676623 1842604 command_runner.go:130] > Version:  0.1.0
	I1216 02:50:06.676645 1842604 command_runner.go:130] > RuntimeName:  containerd
	I1216 02:50:06.676661 1842604 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1216 02:50:06.676671 1842604 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 02:50:06.676683 1842604 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:50:06.676740 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.701508 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1216 02:50:06.703452 1842604 ssh_runner.go:195] Run: containerd --version
	I1216 02:50:06.721412 1842604 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
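With containerd v2.2.0 confirmed, the harness goes on to list the preloaded images over the CRI socket (the crictl images --output json run below). The same check can be made by hand; a sketch using the commands from this log (jq is an assumption about host tooling, not something the harness uses):

	sudo /usr/local/bin/crictl version
	sudo crictl images --output json | jq -r '.images[].repoTags[]'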
	I1216 02:50:06.729453 1842604 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:50:06.732626 1842604 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:50:06.754519 1842604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:50:06.758684 1842604 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 02:50:06.758798 1842604 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:50:06.758921 1842604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:50:06.758993 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.783030 1842604 command_runner.go:130] > {
	I1216 02:50:06.783088 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.783093 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783103 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.783109 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783114 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.783117 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783121 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783130 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.783133 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783138 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.783142 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783146 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783149 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783152 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783160 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.783163 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783169 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.783172 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783176 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783185 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.783188 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783192 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.783196 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783200 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783204 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783207 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783214 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.783224 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783229 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.783232 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783243 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783251 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.783254 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783258 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.783262 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.783266 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783269 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783272 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783278 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.783282 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783287 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.783290 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783294 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783305 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.783308 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783312 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.783317 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783321 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783324 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783328 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783332 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783337 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783340 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783347 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.783351 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.783359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783363 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783370 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.783374 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783381 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.783384 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783392 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783395 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783399 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783403 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783406 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783409 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783415 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.783419 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783424 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.783427 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783431 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783439 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.783442 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783446 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.783450 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783454 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783457 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783461 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783464 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783467 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783470 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783476 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.783480 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783485 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.783488 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783492 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783499 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.783503 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783506 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.783510 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783514 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783520 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783524 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783530 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.783534 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783540 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.783543 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783546 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783554 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.783557 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.783568 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783572 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.783575 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783579 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783582 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.783585 1842604 command_runner.go:130] >     },
	I1216 02:50:06.783588 1842604 command_runner.go:130] >     {
	I1216 02:50:06.783595 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.783599 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.783604 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.783608 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783611 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.783619 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.783622 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.783625 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.783629 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.783633 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.783636 1842604 command_runner.go:130] >       },
	I1216 02:50:06.783639 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.783643 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.783646 1842604 command_runner.go:130] >     }
	I1216 02:50:06.783648 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.783651 1842604 command_runner.go:130] > }
	I1216 02:50:06.785559 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.785577 1842604 containerd.go:534] Images already preloaded, skipping extraction
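	[editor's note] The preload check above works by comparing the `crictl images --output json` listing against the image set the requested Kubernetes version needs. A minimal Go sketch of that idea follows; it is illustrative, not minikube's actual code, and the expected-image list is copied from this log rather than derived programmatically.

```go
// Sketch: decide whether required images are already present by parsing
// `crictl images --output json` (the JSON shown in the log above).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the images logged above.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
	} {
		if !have[want] {
			fmt.Println("missing:", want)
			return
		}
	}
	fmt.Println("all images are preloaded")
}
```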
	I1216 02:50:06.785637 1842604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:50:06.809062 1842604 command_runner.go:130] > {
	I1216 02:50:06.809080 1842604 command_runner.go:130] >   "images":  [
	I1216 02:50:06.809085 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809094 1842604 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 02:50:06.809099 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809105 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 02:50:06.809108 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809112 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809121 1842604 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1216 02:50:06.809125 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809129 1842604 command_runner.go:130] >       "size":  "40636774",
	I1216 02:50:06.809133 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809137 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809140 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809143 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809153 1842604 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 02:50:06.809157 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809162 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 02:50:06.809166 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809170 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809178 1842604 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 02:50:06.809181 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809186 1842604 command_runner.go:130] >       "size":  "8034419",
	I1216 02:50:06.809189 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809193 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809196 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809199 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809207 1842604 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 02:50:06.809211 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809216 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 02:50:06.809219 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809226 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809235 1842604 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1216 02:50:06.809241 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809246 1842604 command_runner.go:130] >       "size":  "21168808",
	I1216 02:50:06.809250 1842604 command_runner.go:130] >       "username":  "nonroot",
	I1216 02:50:06.809254 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809257 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809260 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809267 1842604 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 02:50:06.809270 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809276 1842604 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 02:50:06.809279 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809283 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809291 1842604 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1216 02:50:06.809294 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809298 1842604 command_runner.go:130] >       "size":  "21136588",
	I1216 02:50:06.809303 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809307 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809311 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809315 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809318 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809322 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809325 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809332 1842604 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 02:50:06.809335 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809341 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 02:50:06.809344 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809348 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809356 1842604 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1216 02:50:06.809359 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809364 1842604 command_runner.go:130] >       "size":  "24678359",
	I1216 02:50:06.809367 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809379 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809382 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809386 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809393 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809396 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809399 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809406 1842604 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 02:50:06.809410 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809416 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 02:50:06.809419 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809423 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809432 1842604 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1216 02:50:06.809435 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809439 1842604 command_runner.go:130] >       "size":  "20661043",
	I1216 02:50:06.809443 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809447 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809450 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809453 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809461 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809464 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809467 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809475 1842604 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 02:50:06.809478 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809483 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 02:50:06.809486 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809490 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809498 1842604 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 02:50:06.809501 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809505 1842604 command_runner.go:130] >       "size":  "22429671",
	I1216 02:50:06.809509 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809513 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809516 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809519 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809526 1842604 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 02:50:06.809530 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809535 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 02:50:06.809541 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809545 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809553 1842604 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1216 02:50:06.809556 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809560 1842604 command_runner.go:130] >       "size":  "15391364",
	I1216 02:50:06.809564 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809568 1842604 command_runner.go:130] >         "value":  "0"
	I1216 02:50:06.809571 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809575 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809579 1842604 command_runner.go:130] >       "pinned":  false
	I1216 02:50:06.809582 1842604 command_runner.go:130] >     },
	I1216 02:50:06.809585 1842604 command_runner.go:130] >     {
	I1216 02:50:06.809591 1842604 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 02:50:06.809595 1842604 command_runner.go:130] >       "repoTags":  [
	I1216 02:50:06.809599 1842604 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 02:50:06.809602 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809606 1842604 command_runner.go:130] >       "repoDigests":  [
	I1216 02:50:06.809614 1842604 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1216 02:50:06.809616 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.809620 1842604 command_runner.go:130] >       "size":  "267939",
	I1216 02:50:06.809624 1842604 command_runner.go:130] >       "uid":  {
	I1216 02:50:06.809627 1842604 command_runner.go:130] >         "value":  "65535"
	I1216 02:50:06.809632 1842604 command_runner.go:130] >       },
	I1216 02:50:06.809635 1842604 command_runner.go:130] >       "username":  "",
	I1216 02:50:06.809639 1842604 command_runner.go:130] >       "pinned":  true
	I1216 02:50:06.809642 1842604 command_runner.go:130] >     }
	I1216 02:50:06.809645 1842604 command_runner.go:130] >   ]
	I1216 02:50:06.809648 1842604 command_runner.go:130] > }
	I1216 02:50:06.811271 1842604 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:50:06.811300 1842604 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:50:06.811308 1842604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:50:06.811452 1842604 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:50:06.811544 1842604 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:50:06.842167 1842604 command_runner.go:130] > {
	I1216 02:50:06.842188 1842604 command_runner.go:130] >   "cniconfig": {
	I1216 02:50:06.842194 1842604 command_runner.go:130] >     "Networks": [
	I1216 02:50:06.842198 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842203 1842604 command_runner.go:130] >         "Config": {
	I1216 02:50:06.842208 1842604 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1216 02:50:06.842213 1842604 command_runner.go:130] >           "Name": "cni-loopback",
	I1216 02:50:06.842217 1842604 command_runner.go:130] >           "Plugins": [
	I1216 02:50:06.842220 1842604 command_runner.go:130] >             {
	I1216 02:50:06.842224 1842604 command_runner.go:130] >               "Network": {
	I1216 02:50:06.842229 1842604 command_runner.go:130] >                 "ipam": {},
	I1216 02:50:06.842234 1842604 command_runner.go:130] >                 "type": "loopback"
	I1216 02:50:06.842238 1842604 command_runner.go:130] >               },
	I1216 02:50:06.842243 1842604 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1216 02:50:06.842246 1842604 command_runner.go:130] >             }
	I1216 02:50:06.842249 1842604 command_runner.go:130] >           ],
	I1216 02:50:06.842259 1842604 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1216 02:50:06.842263 1842604 command_runner.go:130] >         },
	I1216 02:50:06.842268 1842604 command_runner.go:130] >         "IFName": "lo"
	I1216 02:50:06.842271 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842275 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842279 1842604 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1216 02:50:06.842283 1842604 command_runner.go:130] >     "PluginDirs": [
	I1216 02:50:06.842287 1842604 command_runner.go:130] >       "/opt/cni/bin"
	I1216 02:50:06.842291 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842298 1842604 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1216 02:50:06.842301 1842604 command_runner.go:130] >     "Prefix": "eth"
	I1216 02:50:06.842304 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842308 1842604 command_runner.go:130] >   "config": {
	I1216 02:50:06.842312 1842604 command_runner.go:130] >     "cdiSpecDirs": [
	I1216 02:50:06.842315 1842604 command_runner.go:130] >       "/etc/cdi",
	I1216 02:50:06.842320 1842604 command_runner.go:130] >       "/var/run/cdi"
	I1216 02:50:06.842328 1842604 command_runner.go:130] >     ],
	I1216 02:50:06.842331 1842604 command_runner.go:130] >     "cni": {
	I1216 02:50:06.842335 1842604 command_runner.go:130] >       "binDir": "",
	I1216 02:50:06.842338 1842604 command_runner.go:130] >       "binDirs": [
	I1216 02:50:06.842342 1842604 command_runner.go:130] >         "/opt/cni/bin"
	I1216 02:50:06.842345 1842604 command_runner.go:130] >       ],
	I1216 02:50:06.842349 1842604 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1216 02:50:06.842352 1842604 command_runner.go:130] >       "confTemplate": "",
	I1216 02:50:06.842356 1842604 command_runner.go:130] >       "ipPref": "",
	I1216 02:50:06.842359 1842604 command_runner.go:130] >       "maxConfNum": 1,
	I1216 02:50:06.842364 1842604 command_runner.go:130] >       "setupSerially": false,
	I1216 02:50:06.842368 1842604 command_runner.go:130] >       "useInternalLoopback": false
	I1216 02:50:06.842371 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842378 1842604 command_runner.go:130] >     "containerd": {
	I1216 02:50:06.842382 1842604 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1216 02:50:06.842387 1842604 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1216 02:50:06.842392 1842604 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1216 02:50:06.842396 1842604 command_runner.go:130] >       "runtimes": {
	I1216 02:50:06.842399 1842604 command_runner.go:130] >         "runc": {
	I1216 02:50:06.842404 1842604 command_runner.go:130] >           "ContainerAnnotations": null,
	I1216 02:50:06.842415 1842604 command_runner.go:130] >           "PodAnnotations": null,
	I1216 02:50:06.842421 1842604 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1216 02:50:06.842425 1842604 command_runner.go:130] >           "cgroupWritable": false,
	I1216 02:50:06.842429 1842604 command_runner.go:130] >           "cniConfDir": "",
	I1216 02:50:06.842433 1842604 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1216 02:50:06.842436 1842604 command_runner.go:130] >           "io_type": "",
	I1216 02:50:06.842439 1842604 command_runner.go:130] >           "options": {
	I1216 02:50:06.842443 1842604 command_runner.go:130] >             "BinaryName": "",
	I1216 02:50:06.842448 1842604 command_runner.go:130] >             "CriuImagePath": "",
	I1216 02:50:06.842451 1842604 command_runner.go:130] >             "CriuWorkPath": "",
	I1216 02:50:06.842455 1842604 command_runner.go:130] >             "IoGid": 0,
	I1216 02:50:06.842458 1842604 command_runner.go:130] >             "IoUid": 0,
	I1216 02:50:06.842462 1842604 command_runner.go:130] >             "NoNewKeyring": false,
	I1216 02:50:06.842469 1842604 command_runner.go:130] >             "Root": "",
	I1216 02:50:06.842473 1842604 command_runner.go:130] >             "ShimCgroup": "",
	I1216 02:50:06.842480 1842604 command_runner.go:130] >             "SystemdCgroup": false
	I1216 02:50:06.842483 1842604 command_runner.go:130] >           },
	I1216 02:50:06.842488 1842604 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1216 02:50:06.842494 1842604 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1216 02:50:06.842499 1842604 command_runner.go:130] >           "runtimePath": "",
	I1216 02:50:06.842504 1842604 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1216 02:50:06.842508 1842604 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1216 02:50:06.842512 1842604 command_runner.go:130] >           "snapshotter": ""
	I1216 02:50:06.842515 1842604 command_runner.go:130] >         }
	I1216 02:50:06.842518 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842521 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842530 1842604 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1216 02:50:06.842535 1842604 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1216 02:50:06.842541 1842604 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1216 02:50:06.842546 1842604 command_runner.go:130] >     "disableApparmor": false,
	I1216 02:50:06.842550 1842604 command_runner.go:130] >     "disableHugetlbController": true,
	I1216 02:50:06.842554 1842604 command_runner.go:130] >     "disableProcMount": false,
	I1216 02:50:06.842558 1842604 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1216 02:50:06.842562 1842604 command_runner.go:130] >     "enableCDI": true,
	I1216 02:50:06.842565 1842604 command_runner.go:130] >     "enableSelinux": false,
	I1216 02:50:06.842569 1842604 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1216 02:50:06.842573 1842604 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1216 02:50:06.842578 1842604 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1216 02:50:06.842582 1842604 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1216 02:50:06.842586 1842604 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1216 02:50:06.842590 1842604 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1216 02:50:06.842595 1842604 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1216 02:50:06.842600 1842604 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842604 1842604 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1216 02:50:06.842610 1842604 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1216 02:50:06.842614 1842604 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1216 02:50:06.842622 1842604 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1216 02:50:06.842625 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842628 1842604 command_runner.go:130] >   "features": {
	I1216 02:50:06.842632 1842604 command_runner.go:130] >     "supplemental_groups_policy": true
	I1216 02:50:06.842635 1842604 command_runner.go:130] >   },
	I1216 02:50:06.842639 1842604 command_runner.go:130] >   "golang": "go1.24.9",
	I1216 02:50:06.842649 1842604 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842658 1842604 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1216 02:50:06.842662 1842604 command_runner.go:130] >   "runtimeHandlers": [
	I1216 02:50:06.842665 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842668 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842672 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842676 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842679 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842682 1842604 command_runner.go:130] >     },
	I1216 02:50:06.842685 1842604 command_runner.go:130] >     {
	I1216 02:50:06.842688 1842604 command_runner.go:130] >       "features": {
	I1216 02:50:06.842693 1842604 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1216 02:50:06.842697 1842604 command_runner.go:130] >         "user_namespaces": true
	I1216 02:50:06.842700 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842703 1842604 command_runner.go:130] >       "name": "runc"
	I1216 02:50:06.842706 1842604 command_runner.go:130] >     }
	I1216 02:50:06.842709 1842604 command_runner.go:130] >   ],
	I1216 02:50:06.842713 1842604 command_runner.go:130] >   "status": {
	I1216 02:50:06.842716 1842604 command_runner.go:130] >     "conditions": [
	I1216 02:50:06.842719 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842723 1842604 command_runner.go:130] >         "message": "",
	I1216 02:50:06.842730 1842604 command_runner.go:130] >         "reason": "",
	I1216 02:50:06.842734 1842604 command_runner.go:130] >         "status": true,
	I1216 02:50:06.842739 1842604 command_runner.go:130] >         "type": "RuntimeReady"
	I1216 02:50:06.842742 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842745 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842756 1842604 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1216 02:50:06.842764 1842604 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1216 02:50:06.842775 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842779 1842604 command_runner.go:130] >         "type": "NetworkReady"
	I1216 02:50:06.842782 1842604 command_runner.go:130] >       },
	I1216 02:50:06.842785 1842604 command_runner.go:130] >       {
	I1216 02:50:06.842816 1842604 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1216 02:50:06.842831 1842604 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1216 02:50:06.842837 1842604 command_runner.go:130] >         "status": false,
	I1216 02:50:06.842848 1842604 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1216 02:50:06.842852 1842604 command_runner.go:130] >       }
	I1216 02:50:06.842855 1842604 command_runner.go:130] >     ]
	I1216 02:50:06.842857 1842604 command_runner.go:130] >   }
	I1216 02:50:06.842860 1842604 command_runner.go:130] > }
	I1216 02:50:06.845895 1842604 cni.go:84] Creating CNI manager for ""
	I1216 02:50:06.845921 1842604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
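	[editor's note] The kindnet recommendation follows from the `crictl info` output above, where the NetworkReady condition is false with reason NetworkPluginNotReady. A hedged Go sketch of reading that condition (assuming `crictl` is on PATH and runnable via sudo; struct fields mirror the JSON keys in this log):

```go
// Sketch: surface why NetworkReady is false in `crictl info` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		if c.Type == "NetworkReady" && !c.Status {
			// In this run: "NetworkPluginNotReady (cni plugin not initialized)".
			fmt.Printf("network not ready: %s (%s)\n", c.Reason, c.Message)
		}
	}
}
```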
	I1216 02:50:06.845936 1842604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:50:06.845966 1842604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:50:06.846165 1842604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
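	[editor's note] The generated kubeadm config above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A minimal Go sketch of iterating those documents with gopkg.in/yaml.v3's multi-document decoder follows; the `cfg` constant is an abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml.new, not the full file.

```go
// Sketch: walk the multi-document kubeadm config and print each kind.
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return // no more documents
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```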
	
	I1216 02:50:06.846270 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:50:06.854737 1842604 command_runner.go:130] > kubeadm
	I1216 02:50:06.854757 1842604 command_runner.go:130] > kubectl
	I1216 02:50:06.854762 1842604 command_runner.go:130] > kubelet
	I1216 02:50:06.854790 1842604 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:50:06.854884 1842604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:50:06.863474 1842604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:50:06.877235 1842604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:50:06.893176 1842604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 02:50:06.907542 1842604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:50:06.911554 1842604 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 02:50:06.912008 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.032285 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
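	[editor's note] The three scp lines plus daemon-reload/start above amount to: write the kubelet unit and its kubeadm drop-in, then restart via systemd. A sketch of those steps, assuming root and systemd on the target; the drop-in body here is abbreviated relative to the full kubelet stanza printed earlier in this log.

```go
// Sketch: install a kubelet systemd drop-in, then reload and start.
package main

import (
	"os"
	"os/exec"
)

func main() {
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --hostname-override=functional-389759 --node-ip=192.168.49.2
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "start", "kubelet"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
```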
	I1216 02:50:07.187841 1842604 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:50:07.187907 1842604 certs.go:195] generating shared ca certs ...
	I1216 02:50:07.187938 1842604 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.188113 1842604 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:50:07.188262 1842604 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:50:07.188293 1842604 certs.go:257] generating profile certs ...
	I1216 02:50:07.188479 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:50:07.188626 1842604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:50:07.188704 1842604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:50:07.188746 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 02:50:07.188833 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 02:50:07.188865 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 02:50:07.188913 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 02:50:07.188955 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 02:50:07.188991 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 02:50:07.189039 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 02:50:07.189094 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 02:50:07.189217 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:50:07.189294 1842604 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:50:07.189332 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:50:07.189413 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:50:07.189488 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:50:07.189568 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:50:07.189665 1842604 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:50:07.189733 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.189792 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem -> /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.189829 1842604 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.192734 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:50:07.215749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:50:07.236395 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:50:07.256110 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:50:07.276540 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:50:07.296274 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:50:07.314749 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:50:07.333206 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:50:07.351818 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:50:07.370275 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:50:07.390851 1842604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:50:07.409219 1842604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:50:07.421911 1842604 ssh_runner.go:195] Run: openssl version
	I1216 02:50:07.427966 1842604 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 02:50:07.428408 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.436062 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:50:07.443738 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447498 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447742 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.447801 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:50:07.490768 1842604 command_runner.go:130] > b5213941
	I1216 02:50:07.491273 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:50:07.498894 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.506703 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:50:07.514440 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518338 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518429 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.518508 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:50:07.559640 1842604 command_runner.go:130] > 51391683
	I1216 02:50:07.560095 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:50:07.567522 1842604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.574982 1842604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:50:07.582626 1842604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586721 1842604 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586817 1842604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.586878 1842604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:50:07.628240 1842604 command_runner.go:130] > 3ec20f2e
	I1216 02:50:07.628688 1842604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
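	[editor's note] The hash-and-symlink sequence above is the standard OpenSSL CA-trust convention: `openssl x509 -hash -noout` prints the subject-name hash (e.g. b5213941), and the cert becomes system-trusted via an /etc/ssl/certs/<hash>.0 symlink. A sketch of one iteration, assuming the `openssl` binary is available and the process can write to /etc/ssl/certs (root); paths are the ones from this run.

```go
// Sketch: compute a cert's subject hash and create the trust symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// os.Symlink fails if the link already exists; a real tool would
	// remove or skip it first (the log uses `ln -fs` for the .pem copy).
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
```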
	I1216 02:50:07.636367 1842604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640270 1842604 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:50:07.640300 1842604 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 02:50:07.640307 1842604 command_runner.go:130] > Device: 259,1	Inode: 2346079     Links: 1
	I1216 02:50:07.640313 1842604 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 02:50:07.640320 1842604 command_runner.go:130] > Access: 2025-12-16 02:45:59.904024015 +0000
	I1216 02:50:07.640326 1842604 command_runner.go:130] > Modify: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640331 1842604 command_runner.go:130] > Change: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640338 1842604 command_runner.go:130] >  Birth: 2025-12-16 02:41:55.041815095 +0000
	I1216 02:50:07.640415 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:50:07.685787 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.686316 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:50:07.726862 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.727358 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:50:07.769278 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.769775 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:50:07.810792 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.811300 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:50:07.852245 1842604 command_runner.go:130] > Certificate will not expire
	I1216 02:50:07.852345 1842604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 02:50:07.894213 1842604 command_runner.go:130] > Certificate will not expire
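	[editor's note] Each `openssl x509 -checkend 86400` probe above asks whether the certificate expires within the next 24 hours. A pure-Go equivalent (a sketch, not minikube's code) using crypto/x509; the path is one of the certs checked in this run.

```go
// Sketch: report whether a PEM certificate expires within 24 hours,
// mirroring `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```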
	I1216 02:50:07.894706 1842604 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:50:07.894832 1842604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:50:07.894910 1842604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:50:07.922900 1842604 cri.go:89] found id: ""
	I1216 02:50:07.922983 1842604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:50:07.930226 1842604 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 02:50:07.930256 1842604 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 02:50:07.930263 1842604 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 02:50:07.931439 1842604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:50:07.931499 1842604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:50:07.931562 1842604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:50:07.943740 1842604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:50:07.944155 1842604 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.944257 1842604 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389759" cluster setting kubeconfig missing "functional-389759" context setting]
	I1216 02:50:07.944564 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
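	[editor's note] The "needs updating (will repair)" decision above comes from loading the kubeconfig and finding no cluster/context entry for the profile. A sketch of that check using k8s.io/client-go's clientcmd package (assumed available as a module dependency); path and profile name are the ones from this run.

```go
// Sketch: detect a kubeconfig missing the expected cluster and context,
// the condition that triggers minikube's repair above.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22158-1796512/kubeconfig")
	if err != nil {
		panic(err)
	}
	name := "functional-389759"
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		fmt.Printf("kubeconfig needs updating: cluster=%v context=%v\n", hasCluster, hasContext)
	}
}
```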
	I1216 02:50:07.945009 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.945157 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:07.945886 1842604 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 02:50:07.945970 1842604 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 02:50:07.945985 1842604 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 02:50:07.945994 1842604 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 02:50:07.946007 1842604 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 02:50:07.946011 1842604 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 02:50:07.946339 1842604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:50:07.958263 1842604 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 02:50:07.958306 1842604 kubeadm.go:602] duration metric: took 26.787333ms to restartPrimaryControlPlane
	I1216 02:50:07.958316 1842604 kubeadm.go:403] duration metric: took 63.631777ms to StartCluster
	I1216 02:50:07.958333 1842604 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.958427 1842604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:07.959238 1842604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:50:07.959525 1842604 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 02:50:07.959950 1842604 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:50:07.960006 1842604 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 02:50:07.960112 1842604 addons.go:70] Setting storage-provisioner=true in profile "functional-389759"
	I1216 02:50:07.960129 1842604 addons.go:239] Setting addon storage-provisioner=true in "functional-389759"
	I1216 02:50:07.960152 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:07.960945 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.961166 1842604 addons.go:70] Setting default-storageclass=true in profile "functional-389759"
	I1216 02:50:07.961188 1842604 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389759"
	I1216 02:50:07.961453 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:07.966091 1842604 out.go:179] * Verifying Kubernetes components...
	I1216 02:50:07.968861 1842604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:50:07.999405 1842604 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:50:08.003951 1842604 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.003988 1842604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
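
"scp memory --> ..." denotes streaming an in-memory manifest to the node rather than copying a local file. A rough sketch of the same effect with golang.org/x/crypto/ssh, piping the bytes into a privileged tee (the dial parameters, session wiring, and destination path are assumptions for illustration):

    package main

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // copyMemoryFile writes data to dst on the remote host via sudo tee.
    func copyMemoryFile(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()

        stdin, err := sess.StdinPipe()
        if err != nil {
            return err
        }
        if err := sess.Start(fmt.Sprintf("sudo tee %s >/dev/null", dst)); err != nil {
            return err
        }
        if _, err := stdin.Write(data); err != nil {
            return err
        }
        stdin.Close() // EOF lets tee finish
        return sess.Wait()
    }

    func main() {
        // Dialing (127.0.0.1:34354, user "docker", key auth per the sshutil
        // line below) is elided for brevity.
    }
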
	I1216 02:50:08.004070 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.016743 1842604 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:50:08.016935 1842604 kapi.go:59] client config for functional-389759: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 02:50:08.017229 1842604 addons.go:239] Setting addon default-storageclass=true in "functional-389759"
	I1216 02:50:08.017278 1842604 host.go:66] Checking if "functional-389759" exists ...
	I1216 02:50:08.017759 1842604 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:50:08.056545 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.063547 1842604 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:08.063573 1842604 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:50:08.063643 1842604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:50:08.096801 1842604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:50:08.182820 1842604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:50:08.204300 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:08.216429 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
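
Both addon manifests are applied by shelling out to the version-pinned kubectl with the node-local kubeconfig, exactly as the Run lines show. A compact sketch of that invocation (sudo accepts the VAR=value assignment before the command, so KUBECONFIG reaches kubectl):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyManifest mirrors the logged command:
    //   sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <manifest>
    func applyManifest(manifest string) error {
        kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            kubectl, "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
        }
        return nil
    }

    func main() {
        if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            fmt.Println(err) // caller retries on failure, as in the log
        }
    }
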
	I1216 02:50:08.938921 1842604 node_ready.go:35] waiting up to 6m0s for node "functional-389759" to be "Ready" ...
	I1216 02:50:08.939066 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:08.939127 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939127 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1216 02:50:08.939303 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939364 1842604 retry.go:31] will retry after 371.599151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939463 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:08.939655 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:08.939681 1842604 retry.go:31] will retry after 208.586178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
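
Each failed apply is rescheduled by retry.go with an uneven delay (371ms, 208ms, 439ms, ... above), i.e. jittered backoff rather than a fixed interval, so the two appliers do not hammer the down apiserver in lockstep. A minimal sketch of that pattern, with illustrative durations that only approximate minikube's actual policy:

    package main

    import (
        "math/rand"
        "time"
    )

    // retryApply reruns apply with jittered exponential backoff until it
    // succeeds or maxWait elapses, returning the last error on timeout.
    func retryApply(apply func() error, maxWait time.Duration) error {
        backoff := 200 * time.Millisecond
        deadline := time.Now().Add(maxWait)
        for {
            err := apply()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            // Jitter in [backoff/2, 3*backoff/2) yields the uneven
            // intervals visible in the log.
            sleep := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
            time.Sleep(sleep)
            backoff *= 2
        }
    }

    func main() {
        _ = retryApply(func() error { return nil }, time.Minute)
    }
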
	I1216 02:50:08.939821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.149421 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.213240 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.213284 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.213304 1842604 retry.go:31] will retry after 201.914515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.311585 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.373333 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.373376 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.373396 1842604 retry.go:31] will retry after 439.688248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.415509 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:09.439207 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:09.483422 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.483469 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.483489 1842604 retry.go:31] will retry after 841.778226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.814006 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:09.876109 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:09.880285 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.880327 1842604 retry.go:31] will retry after 574.892877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:09.939502 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:09.939583 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:09.939923 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.325447 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:10.394946 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.394995 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.395014 1842604 retry.go:31] will retry after 1.198470662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.439106 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.439176 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.439428 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:10.455825 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:10.523765 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:10.523815 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.523848 1842604 retry.go:31] will retry after 636.325982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:10.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:10.939367 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:10.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:10.939781 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
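
The node_ready loop above issues one GET per second against /api/v1/nodes/functional-389759 and treats a refused connection as retryable while the apiserver restarts. A hypothetical equivalent with a typed client-go clientset (polling interval and error handling are illustrative):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node reports Ready or ctx expires.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for {
            node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            // err != nil (e.g. connection refused) just falls through
            // to the next tick, matching the warnings in the log.
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        // Clientset construction from the kubeconfig is shown in the
        // earlier sketch and elided here.
    }
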
	I1216 02:50:11.161191 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:11.242833 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.242908 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.242943 1842604 retry.go:31] will retry after 1.140424726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.439276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.439654 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:11.594053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:11.649408 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:11.653163 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.653194 1842604 retry.go:31] will retry after 1.344955883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:11.939594 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:11.939687 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:11.940009 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.383614 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:12.439264 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.439344 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.440165 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:12.443835 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.443867 1842604 retry.go:31] will retry after 2.819298169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:12.939234 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:12.939324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:12.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:12.999127 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:13.066096 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:13.066142 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.066177 1842604 retry.go:31] will retry after 2.29209329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:13.439591 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.439676 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.440017 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:13.440078 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:13.939859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:13.939946 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:13.940333 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.439228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.439599 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:14.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:14.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:14.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:15.264053 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:15.323662 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.327080 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.327109 1842604 retry.go:31] will retry after 3.65241611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.359324 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:15.421588 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:15.421635 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.421654 1842604 retry.go:31] will retry after 1.62104706s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:15.439778 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.439879 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.440170 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:15.440216 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:15.940008 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:15.940084 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:15.940410 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.440078 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.440155 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.440450 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:16.939191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:16.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:16.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:17.043912 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:17.099707 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:17.103362 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.103395 1842604 retry.go:31] will retry after 4.481188348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:17.439835 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.439929 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.440261 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:17.440327 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:17.940004 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:17.940083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:17.940382 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.439649 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.939235 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:18.939311 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:18.939696 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:18.980018 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:19.042087 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:19.045748 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.045786 1842604 retry.go:31] will retry after 3.780614615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:19.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.439604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:19.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:19.939337 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:19.939666 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:19.939722 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:20.439426 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.439516 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.439851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:20.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:20.939243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:20.939502 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.439268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.439607 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:21.585043 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:21.648279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:21.648322 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.648342 1842604 retry.go:31] will retry after 5.326379112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:21.939713 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:21.939790 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:21.940115 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:21.940177 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:22.439859 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.439927 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.440196 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:22.826669 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:22.887724 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:22.891256 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.891291 1842604 retry.go:31] will retry after 7.007720529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:22.939466 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:22.939552 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:22.939870 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.439633 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.439715 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.440036 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:23.939677 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:23.939748 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:23.940008 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:24.439927 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.440005 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.440343 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:24.440400 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:24.939690 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:24.939766 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:24.940068 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.439712 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.439799 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.440085 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:25.939947 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:25.940024 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:25.940358 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.439107 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.439185 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:26.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:26.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:26.939570 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:26.939627 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:26.975786 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:27.047539 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:27.047579 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:27.047598 1842604 retry.go:31] will retry after 10.416340882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
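The retry.go lines above show the addon apply being re-run with a growing delay (10.4s here, then 13.6s, 13.4s, 17.4s, and later 31.5s and 44.9s below) until the apiserver is reachable. kubectl suggests --validate=false only because validation needs the OpenAPI schema from the unreachable apiserver; the apply itself would still fail to connect, so backing off and retrying is the only real option. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical runApply helper standing in for the ssh_runner call; the attempt count, base delay, and jitter are illustrative, not minikube's retry.go internals.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// runApply stands in for the ssh_runner invocation in the log:
	// kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	func runApply() error {
		cmd := exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/storageclass.yaml")
		return cmd.Run()
	}

	func main() {
		delay := 10 * time.Second // first retry in the log waits ~10s
		for attempt := 1; attempt <= 6; attempt++ {
			err := runApply()
			if err == nil {
				fmt.Println("apply succeeded")
				return
			}
			fmt.Printf("attempt %d failed, will retry after %s: %v\n",
				attempt, delay, err)
			time.Sleep(delay)
			// Grow the delay with a little jitter, loosely matching
			// the irregular 10-45s intervals printed by retry.go.
			delay += time.Duration(rand.Int63n(int64(8 * time.Second)))
		}
		fmt.Println("giving up")
	}

The jittered growth keeps repeated failures from hammering the apiserver at a fixed rate while it restarts, which is why the intervals in the log are irregular rather than a clean doubling.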
	I1216 02:50:27.439244 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.439321 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.439647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:27.939345 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:27.939450 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:27.939785 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.439175 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.439255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.439518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:28.939274 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:28.939371 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:28.939720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:28.939777 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:29.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.439629 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.899356 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:29.940020 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:29.940094 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:29.940346 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:29.975996 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:29.976895 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:29.976922 1842604 retry.go:31] will retry after 13.637319362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:30.439216 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.439291 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.439575 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:30.939293 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:30.939381 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:30.939716 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:31.439200 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:31.439575 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:31.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:31.939298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:31.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.439412 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.439784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:32.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:32.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:32.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:33.439275 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.439356 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.439660 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:33.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:33.939192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:33.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:33.939591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.439634 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.439714 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.439961 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:34.939579 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:34.939653 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:34.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:35.439846 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.439925 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.440292 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:35.440352 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:35.940006 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:35.940080 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:35.940335 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.440174 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.440258 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.440580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:36.939312 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:36.939396 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:36.939727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.439172 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.439556 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:37.464839 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:37.535691 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:37.535727 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.535748 1842604 retry.go:31] will retry after 13.417840341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:37.939229 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:37.939309 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:37.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:37.939658 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:38.439518 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.439602 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.439942 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:38.939665 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:38.939784 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:38.940059 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.440088 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.440162 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.440456 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:39.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:39.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:39.939587 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:40.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.439222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.439491 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:40.439540 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:40.939209 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:40.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:40.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.439307 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:41.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:41.939288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:41.939552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:42.439218 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.439302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.439616 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:42.439677 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:42.939334 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:42.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:42.939784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.439162 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.439595 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:43.615150 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:50:43.680878 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:43.680928 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.680947 1842604 retry.go:31] will retry after 17.388789533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:43.939315 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:43.939409 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:43.939851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:44.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:44.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:44.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:44.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:44.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:44.939520 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:44.939567 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:45.439260 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:45.439342 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:45.439687 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:45.939401 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:45.939486 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:45.939829 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:46.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:46.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:46.439536 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:46.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:46.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:46.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:46.939686 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:47.439343 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:47.439416 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:47.439727 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:47.939183 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:47.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:47.939522 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:48.439361 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:48.439438 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:48.439738 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:48.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:48.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:48.939638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:49.439150 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:49.439224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:49.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:49.439525 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:49.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:49.939270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:49.939604 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:50.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:50.439605 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:50.939264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:50.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:50.954020 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:50:51.020279 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:50:51.020323 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.020343 1842604 retry.go:31] will retry after 13.418822402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:50:51.440005 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:51.440079 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:51.440420 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:51.440473 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:51.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:51.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:51.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:52.439148 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:52.439218 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:52.439483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:52.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:52.939281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:52.939608 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:53.439303 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:53.439380 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:53.439660 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:53.939181 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:53.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:53.939557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:53.939608 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:54.439173 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:54.439246 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:54.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:54.939220 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:54.939295 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:54.939592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:55.439171 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:55.439239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:55.439492 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:55.939178 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:55.939253 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:55.939589 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:55.939646 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:56.439315 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:56.439401 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:56.439752 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:56.939227 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:56.939298 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:56.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:57.439234 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:57.439306 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:57.439656 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:57.939434 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:57.939515 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:57.939851 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:50:57.939910 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:50:58.439684 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:58.439750 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:58.440021 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:58.939803 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:58.939878 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:58.940159 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:59.440068 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:59.440142 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:59.440488 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:50:59.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:50:59.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:50:59.939482 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:00.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:00.439279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:00.439642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:00.439702 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:00.939370 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:00.939452 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:00.939786 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:01.070030 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:01.132180 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:01.132233 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:01.132254 1842604 retry.go:31] will retry after 31.549707812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:01.439619 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:01.439687 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:01.439937 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:01.939692 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:01.939769 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:01.940100 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:02.439919 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:02.439992 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:02.440273 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:02.440317 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:02.939603 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:02.939682 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:02.939996 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:03.439752 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:03.439849 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:03.440174 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:03.939981 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:03.940061 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:03.940401 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:04.439159 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:04.439236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:04.439336 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:04.439680 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:04.517286 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:04.517332 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:04.517352 1842604 retry.go:31] will retry after 44.886251271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:04.939909 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:04.939982 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:04.940274 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:04.940323 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:05.440102 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:05.440174 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:05.440508 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:05.939164 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:05.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:05.939577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:06.439220 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:06.439296 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:06.439582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:06.939309 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:06.939386 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:06.939705 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:07.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:07.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:07.439515 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:07.439570 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:07.939211 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:07.939296 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:07.939674 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:08.439772 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:08.439857 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:08.440214 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:08.939551 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:08.939621 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:08.939875 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:09.439806 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:09.439883 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:09.440225 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:09.440285 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 poll, with request and response entries identical to those above, repeated every ~500ms from 02:51:09.9 through 02:51:32.4; every attempt was refused (dial tcp 192.168.49.2:8441: connect: connection refused) and a node_ready "will retry" warning was logged roughly every 2s ...]
	I1216 02:51:32.683088 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:51:32.746941 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:32.746981 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 02:51:32.747001 1842604 retry.go:31] will retry after 33.271174209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
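
[Editor's note: the "will retry after 33.271174209s" line comes from minikube's retry helper (retry.go), which backs off between attempts at applying the addon manifest. A simplified, stdlib-only analogue of that retry-with-jittered-backoff pattern follows; the helper name and delays are assumptions for illustration, not the actual retry.go logic.]

	// retry_sketch.go - simplified analogue of a retry-with-backoff helper.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to attempts times, sleeping an exponentially
	// growing, jittered delay between failures, and returns the last error.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter the delay so concurrent retriers don't synchronize.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryWithBackoff(5, 2*time.Second, func() error {
			// Stand-in for `kubectl apply --force -f .../storage-provisioner.yaml`.
			return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
		})
		fmt.Println("giving up:", err)
	}
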
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 polls repeated every ~500ms from 02:51:32.9 through 02:51:48.9, each refused (dial tcp 192.168.49.2:8441: connect: connection refused), with node_ready "will retry" warnings roughly every 2s ...]
	I1216 02:51:49.404519 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:51:49.440015 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.440083 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.440326 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:49.440365 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:49.475362 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475396 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:51:49.475480 1842604 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
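
[Editor's note: kubectl apply is failing here at the validation step: before applying, kubectl downloads the OpenAPI schema from the apiserver, and with no listener on port 8441 that fetch itself gets connection refused, so --validate=false would only move the failure to the actual apply request. A quick way to distinguish "apiserver down" from "manifest invalid" is to probe the apiserver's /readyz endpoint first, as sketched below; the host/port are taken from the log, TLS verification is skipped for local diagnosis only, and /readyz may return 401 without credentials, but any HTTP response at all proves the listener is up.]

	// apiserver_probe.go - probe the apiserver before applying manifests.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// Local diagnosis only: do not skip verification in production.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err != nil {
			// Matches the log: the listener isn't up at all.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /readyz:", resp.Status)
	}
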
	I1216 02:51:49.939258 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:49.939331 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:49.939684 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:50.439273 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:50.439350 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:50.439747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:50.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:50.939239 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:50.939510 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:51.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:51.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:51.439630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:51.939329 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:51.939404 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:51.939742 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:51.939799 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:52.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:52.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:52.439527 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:52.939172 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:52.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:52.939573 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:53.439284 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:53.439357 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:53.439683 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:53.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:53.939222 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:53.939479 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:54.439208 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:54.439342 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:54.439675 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:54.439728 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:54.939197 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:54.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:54.939602 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:55.439136 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:55.439209 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:55.439455 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:55.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:55.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:55.939619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:56.439186 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:56.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:56.439564 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:56.939156 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:56.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:56.939488 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:56.939531 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:57.439216 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:57.439297 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:57.439615 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:57.939193 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:57.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:57.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:58.439148 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:58.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:58.439522 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:58.939200 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:58.939289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:58.939635 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:51:58.939699 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:51:59.439184 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:59.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:59.439571 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:51:59.939146 1842604 type.go:168] "Request Body" body=""
	I1216 02:51:59.939214 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:51:59.939462 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:00.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:00.439330 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:00.439711 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:52:00.939206 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:00.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:00.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[identical GET polls of /api/v1/nodes/functional-389759 repeated every ~500ms through 02:52:05.939, each with an empty response (status="", 0-1 ms); node_ready.go:55 logged "connection refused" (will retry) warnings at 02:52:01, 02:52:03, and 02:52:05]
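The lines above record minikube's node-readiness poll: one GET of the node object roughly every 500 ms, tolerating connection-refused errors while the apiserver restarts. Below is a minimal sketch of that pattern, assuming client-go; waitNodeReady, the kubeconfig path, the 500 ms interval, and the timeout are illustrative choices taken from the log cadence, not minikube's actual implementation.

	// Minimal readiness poll, assuming client-go. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> until the Ready condition
	// is True or the context expires. Errors (e.g. connection refused while
	// the apiserver restarts) are logged and retried, mirroring the
	// "will retry" warnings in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				log.Printf("error getting node %q (will retry): %v", name, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Hypothetical paths/names for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-389759"); err != nil {
			log.Fatal(err)
		}
	}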
	I1216 02:52:06.018765 1842604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:52:06.088979 1842604 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089023 1842604 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 02:52:06.089108 1842604 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 02:52:06.092116 1842604 out.go:179] * Enabled addons: 
	I1216 02:52:06.094111 1842604 addons.go:530] duration metric: took 1m58.134103468s for enable addons: enabled=[]
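The addon step above runs kubectl apply inside the node with the in-VM kubeconfig and retries on failure; it fails here because kubectl's client-side validation needs the apiserver's OpenAPI document and the apiserver is still refusing connections on port 8441. A minimal sketch of that retry pattern, reusing the paths shown in the log; the retry budget and sleep are illustrative assumptions, not minikube's actual addon code.

	// Minimal apply-with-retry sketch; paths taken from the log above.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func applyAddon(manifest string) error {
		var err error
		for attempt := 1; attempt <= 10; attempt++ {
			// sudo accepts leading VAR=value arguments, so this mirrors the
			// command line recorded by ssh_runner.go above.
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest)
			var out []byte
			out, err = cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			// While the apiserver is down, validation fails with "connection
			// refused"; waiting and retrying is safer than passing
			// --validate=false, which would skip validation entirely.
			log.Printf("apply failed (attempt %d): %v\n%s", attempt, err, out)
			time.Sleep(3 * time.Second)
		}
		return err
	}

	func main() {
		if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			log.Fatal(err)
		}
	}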
	I1216 02:52:06.439418 1842604 type.go:168] "Request Body" body=""
	I1216 02:52:06.439511 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:52:06.439875 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[identical GET polls of /api/v1/nodes/functional-389759 repeated every ~500ms from 02:52:06.939 through 02:53:00.939, each with an empty response (status="", 0 ms); node_ready.go:55 logged "connection refused" (will retry) warnings roughly every two seconds, from 02:52:07 through 02:52:58]
	W1216 02:53:00.939649 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:01.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.439220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.439476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:01.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:01.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:01.939638 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.439351 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.439429 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.439735 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:02.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:02.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:02.939471 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:03.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.439264 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:03.439650 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:03.939321 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:03.939395 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:03.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.439219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.439542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:04.939207 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:04.939283 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:04.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.439194 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.439278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:05.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:05.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:05.939477 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:05.939529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:06.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.439240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.439557 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:06.939290 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:06.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:06.939724 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.439508 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:07.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:07.939335 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:07.939685 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:07.939739 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:08.439494 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.439576 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.439903 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:08.939686 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:08.939764 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:08.940063 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.440032 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.440108 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.440422 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:09.939177 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:09.939254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:09.939592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:10.439155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.439232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:10.439589 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:10.939186 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:10.939266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:10.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.439261 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.439598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:11.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:11.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:11.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:12.439168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.439584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:12.439639 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:12.939298 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:12.939374 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:12.939740 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.439187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:13.939189 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:13.939265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:13.939586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:14.439387 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.439472 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.439812 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:14.439867 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:14.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:14.939501 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.439282 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.439631 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:15.939227 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:15.939303 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:15.939647 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.439157 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.439233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:16.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:16.939242 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:16.939600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:16.939656 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:17.439323 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.439397 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.439744 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:17.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:17.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:17.939535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.439237 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.439576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:18.939325 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:18.939420 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:18.939765 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:18.939822 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:19.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.439538 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:19.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:19.939233 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:19.939553 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.439182 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.439256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:20.939144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:20.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:20.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:21.439165 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.439583 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:21.439640 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:21.939297 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:21.939373 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:21.939731 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.439139 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.439207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.439539 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:22.939225 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:22.939299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:22.939630 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:23.439346 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.439421 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.439771 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:23.439824 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:23.939143 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:23.939221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:23.939483 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.439299 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.439632 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:24.939217 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:24.939319 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:24.939706 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.439591 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:25.939163 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:25.939238 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:25.939582 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:25.939638 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:26.439190 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.439266 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.439592 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:26.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:26.939259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:26.939560 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.439206 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.439601 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:27.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:27.939270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:27.939597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:28.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.439216 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:28.439529 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:28.939176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:28.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:28.939576 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.439313 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.439393 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.439725 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:29.939171 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:29.939247 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:29.939503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:30.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:30.439248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:30.439600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:30.439655 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:30.939320 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:30.939418 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:30.939784 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:31.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:31.439224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:31.439475 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:31.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:31.939236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:31.939534 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:32.439224 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:32.439297 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:32.439620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:32.439676 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:32.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:32.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:32.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:33.439217 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:33.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:33.439580 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:33.939225 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:33.939316 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:33.939624 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:34.439156 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:34.439245 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:34.439564 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:34.939204 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:34.939278 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:34.939620 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:34.939677 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:35.439212 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:35.439289 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:35.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:35.939169 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:35.939263 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:35.939567 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:36.439228 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:36.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:36.439642 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:36.939277 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:36.939377 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:36.939732 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:36.939794 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:37.439162 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:37.439230 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:37.439537 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:37.939236 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:37.939314 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:37.939606 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:38.439417 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:38.439490 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:38.439842 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:38.939161 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:38.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:38.939518 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:39.439409 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:39.439482 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:39.439830 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:39.439883 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:39.939560 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:39.939640 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:39.939973 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:40.439727 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:40.439800 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:40.440066 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:40.939896 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:40.939970 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:40.940284 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:41.440124 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:41.440201 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:41.440497 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:41.440543 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:41.943162 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:41.943240 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:41.943561 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:42.439267 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:42.439351 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:42.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:42.939415 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:42.939490 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:42.939836 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:43.439147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:43.439215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:43.439499 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:43.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:43.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:43.939603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:43.939661 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:44.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:44.439294 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:44.439587 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:44.939247 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:44.939347 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:44.939662 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:45.439358 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:45.439436 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:45.439789 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:45.939376 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:45.939453 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:45.939756 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:45.939804 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:46.439144 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:46.439210 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:46.439473 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:46.939145 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:46.939219 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:46.939525 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:47.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:47.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:47.439589 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:47.939235 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:47.939329 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:47.939704 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:48.439754 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:48.439848 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:48.440188 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:48.440247 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:53:48.939992 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:48.940067 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:48.940363 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:49.439133 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:49.439200 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:49.439457 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:49.939208 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:49.939284 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:49.939646 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:50.439384 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:50.439476 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:50.439821 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:53:50.939147 1842604 type.go:168] "Request Body" body=""
	I1216 02:53:50.939220 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:53:50.939481 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:53:50.939521 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-389759 poll repeats every ~500 ms from 02:53:51 through 02:54:22, with identical headers and an empty response (0 to 5 ms); node_ready.go logs the same "connection refused" will-retry warning roughly every two seconds ...]
	W1216 02:54:22.440370 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
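A minute in, every attempt is still failing at the dial layer: the TCP connection is refused before any TLS or HTTP exchange happens, which is consistent with the empty status="" and milliseconds=0 in the responses above. A short sketch of how a Go client can distinguish this refused-dial case from other request failures, assuming a Linux target where the error unwraps to syscall.ECONNREFUSED:

	// refused.go: sketch showing how to tell "connection refused" (nothing
	// listening on the port) apart from other request failures. Linux-only
	// assumption: the refused dial unwraps to syscall.ECONNREFUSED.
	package main

	import (
		"errors"
		"fmt"
		"net/http"
		"syscall"
	)

	func main() {
		_, err := http.Get("https://192.168.49.2:8441/healthz") // hypothetical probe URL
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("apiserver port is closed: nothing is listening on 8441")
		} else if err != nil {
			fmt.Println("request failed for another reason:", err)
		}
	}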
	[... the pattern continues unchanged from 02:54:22 through 02:54:50: ~500 ms polls with identical headers, empty responses, and the same will-retry "connection refused" warning every ~2 s ...]
	W1216 02:54:50.439774 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:50.939149 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:50.939228 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:50.939548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.439191 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.439267 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.439563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:51.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:51.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:51.939590 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.439158 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.439503 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:52.939205 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:52.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:52.939612 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:52.939669 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:53.439238 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.439324 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.439640 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:53.939199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:53.939275 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:53.939529 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.439450 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.439530 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.439887 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:54.939607 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:54.939685 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:54.940014 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:54.940065 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:55.439530 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.439603 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.439907 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:55.939195 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:55.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:55.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.439221 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.439329 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.439662 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:56.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:56.939256 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:56.939519 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:57.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.439269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.439641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:57.439700 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:54:57.939371 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:57.939446 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:57.939781 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.439185 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.439254 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.439586 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:58.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:58.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:58.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.439281 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.439643 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:54:59.939188 1842604 type.go:168] "Request Body" body=""
	I1216 02:54:59.939274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:54:59.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:54:59.939684 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:00.439213 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.439308 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.439758 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:00.939212 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:00.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:00.939641 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:01.939203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:01.939279 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:01.939639 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:01.939706 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:02.439386 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.439463 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.439808 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:02.939159 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:02.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:02.939574 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.439203 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.439285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.439626 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:03.939179 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:03.939260 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:03.939584 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:04.439152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.439226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.439530 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:04.439579 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:04.939230 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:04.939305 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:04.939672 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.439280 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.439358 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.439751 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:05.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:05.939215 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:05.939466 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:06.439161 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.439243 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.439577 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:06.439630 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:06.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:06.939272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:06.939644 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.439279 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.439353 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.439621 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:07.939358 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:07.939442 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:07.939811 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:08.439810 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.439889 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.440239 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:08.440291 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:08.939609 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:08.939681 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:08.939929 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.439800 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.439881 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.440208 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:09.939526 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:09.939604 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:09.939943 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.439719 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.439792 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.440067 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:10.939812 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:10.939897 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:10.940243 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:10.940297 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:11.440018 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.440100 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.440421 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:11.939675 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:11.939759 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:11.940020 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.439848 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.439919 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.440237 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:12.940077 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:12.940153 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:12.940509 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:12.940583 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:13.439151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.439221 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.439490 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:13.939194 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:13.939268 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:13.939610 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.439189 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.439265 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.439597 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:14.939154 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:14.939229 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:14.939566 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:15.439227 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.439663 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:15.439716 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:15.939170 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:15.939245 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:15.939569 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.439197 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.439270 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.439533 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:16.939168 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:16.939252 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:16.939603 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:17.439324 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.439408 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.439747 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:17.439815 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:17.939153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:17.939232 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:17.939524 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.439199 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.439650 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:18.939201 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:18.939285 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:18.939628 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.439146 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.439223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.439468 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:19.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:19.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:19.939563 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:19.939616 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:20.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.439259 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.439548 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:20.939137 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:20.939207 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:20.939465 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:21.439204 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:21.439287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:21.439657 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:21.939240 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:21.939327 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:21.939664 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:21.939729 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:22.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:22.439225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:22.439535 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:22.939190 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:22.939269 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:22.939611 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:23.439305 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:23.439389 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:23.439720 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:23.939151 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:23.939226 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:23.939542 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:24.439195 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:24.439274 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:24.439594 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:24.439651 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:24.939180 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:24.939255 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:24.939598 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:25.439138 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:25.439211 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:25.439463 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:25.939157 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:25.939244 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:25.939581 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:26.439089 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:26.439163 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:26.439495 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:26.939152 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:26.939224 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:26.939479 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:26.939524 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:27.439214 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:27.439294 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:27.439661 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:27.939373 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:27.939451 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:27.939805 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:28.439176 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:28.439250 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:28.439552 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:28.939221 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:28.939302 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:28.939614 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:28.939667 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:29.439402 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:29.439474 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:29.439787 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:29.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:29.939223 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:29.939489 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:30.439192 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:30.439272 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:30.439596 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:30.939323 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:30.939401 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:30.939721 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:30.939776 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:31.439153 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:31.439227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:31.439500 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:31.939196 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:31.939328 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:31.939663 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:32.439376 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:32.439451 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:32.439836 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:32.939913 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:32.939990 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:32.940359 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:32.940407 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:33.439139 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:33.439212 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:33.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:33.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:33.939287 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:33.939609 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:34.439167 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:34.439249 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:34.439588 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:34.939202 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:34.939277 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:34.939600 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:35.439211 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:35.439288 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:35.439656 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:35.439710 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:35.939155 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:35.939227 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:35.939499 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:36.439164 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:36.439251 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:36.439593 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:36.939187 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:36.939271 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:36.939625 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:37.439164 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:37.439236 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:37.439484 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:37.939175 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:37.939257 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:37.939617 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:37.939669 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:38.439358 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:38.439429 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:38.439750 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:38.939148 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:38.939225 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:38.939476 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:39.439218 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:39.439301 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:39.439681 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:39.939198 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:39.939276 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:39.939619 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 02:55:40.439149 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:40.439231 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:40.439559 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 02:55:40.439611 1842604 node_ready.go:55] error getting node "functional-389759" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389759": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 02:55:40.939175 1842604 type.go:168] "Request Body" body=""
	I1216 02:55:40.939248 1842604 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389759" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 02:55:40.939558 1842604 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll repeats every ~500ms through 02:56:08.440, each cycle logging an empty response; node_ready.go:55 re-emits the "connection refused" warning roughly every 2-2.5s ...]
	I1216 02:56:08.939119 1842604 node_ready.go:38] duration metric: took 6m0.000151723s for node "functional-389759" to be "Ready" ...
	I1216 02:56:08.942443 1842604 out.go:203] 
	W1216 02:56:08.945313 1842604 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 02:56:08.945510 1842604 out.go:285] * 
	W1216 02:56:08.947818 1842604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 02:56:08.950773 1842604 out.go:203] 
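	The wait loop above is minikube polling the node's Ready condition until its 6m0s deadline expires. A minimal way to reproduce the same check by hand (a sketch only; it assumes the apiserver at 192.168.49.2:8441 is reachable and a kubeconfig for the profile is in use):
	# Query only the Ready condition of the node, as the loop above does.
	kubectl get node functional-389759 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	Here every attempt fails at the TCP dial, so the loop never sees a condition at all.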
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:16 functional-389759 containerd[5252]: time="2025-12-16T02:56:16.500864545Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.584917119Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.586986759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.595442921Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:17 functional-389759 containerd[5252]: time="2025-12-16T02:56:17.596292030Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.548403756Z" level=info msg="No images store for sha256:a041f367bbd28fc7382515453bcff124b359f5670c41bb859c301862fece5890"
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.550521214Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-389759\""
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.558262379Z" level=info msg="ImageCreate event name:\"sha256:106be42faa4dd4147343df978553d005570a8f7212a8499769ed94b75df65cdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:18 functional-389759 containerd[5252]: time="2025-12-16T02:56:18.558804072Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-389759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.349318710Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.351872443Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.353850252Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 16 02:56:19 functional-389759 containerd[5252]: time="2025-12-16T02:56:19.366018149Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.343034268Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.345466444Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.347430945Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.355266975Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.519415516Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.521760604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.531285366Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.531633659Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.652877251Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.654971957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.662261093Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 02:56:20 functional-389759 containerd[5252]: time="2025-12-16T02:56:20.663564330Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
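	The ImageCreate/ImageDelete events above show the container runtime itself is healthy; only the kubelet is failing. A sketch for confirming the image state directly against containerd's CRI endpoint (assuming the standard crictl binary is present inside the minikube node, as it normally is):
	# List CRI-managed images on the node; the pause images above should appear.
	out/minikube-linux-arm64 -p functional-389759 ssh -- sudo crictl images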
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:56:24.790581    9369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:24.791022    9369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:24.792203    9369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:24.792753    9369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:56:24.794468    9369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:56:24 up  8:38,  0 user,  load average: 0.92, 0.42, 0.85
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 02:56:21 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:22 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 16 02:56:22 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:22 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:22 functional-389759 kubelet[9231]: E1216 02:56:22.494255    9231 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:22 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:22 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:23 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 16 02:56:23 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:23 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:23 functional-389759 kubelet[9246]: E1216 02:56:23.247530    9246 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:23 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:23 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:23 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 16 02:56:23 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:23 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:24 functional-389759 kubelet[9281]: E1216 02:56:24.000952    9281 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:24 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:24 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 02:56:24 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 16 02:56:24 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:24 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 02:56:24 functional-389759 kubelet[9360]: E1216 02:56:24.744165    9360 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 02:56:24 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 02:56:24 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
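The kubelet journal above shows the actual failure: on this cgroup v1 host, kubelet v1.35 refuses to start ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a loop (restart counter 827-830) and the apiserver never comes up. The kubeadm warning later in this report names the escape hatch: set the kubelet configuration option FailCgroupV1 to false. A sketch of that workaround, applied to the config file kubeadm writes (the lower-camelCase YAML key spelling is assumed, and the append assumes the key is not already set):

	# Opt back into (deprecated) cgroup v1 support, then restart the kubelet.
	# /var/lib/kubelet/config.yaml is the path logged by kubeadm in this report.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet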
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (425.306047ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
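The helper's status probe uses minikube's Go-template output. The same template style can query several components at once (a sketch; field names beyond APIServer are assumed from minikube's status struct):
	# Print apiserver and kubelet state on one line.
	out/minikube-linux-arm64 status -p functional-389759 \
	  --format='{{.APIServer}}/{{.Kubelet}}'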
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (732.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 02:58:51.136589 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:00:51.130848 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:02:14.206466 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:03:51.139395 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:05:51.136294 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m10.756679342s)

                                                
                                                
-- stdout --
	* [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005338087s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init output identical to the first attempt above, differing only in the timeout stamp: "The kubelet is not healthy after 4m0.000244075s" ...]
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init output identical to the previous attempt ...]
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-389759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m10.757865139s for "functional-389759" cluster.
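A minimal triage sequence for the K8S_KUBELET_NOT_RUNNING failure above, assuming the functional-389759 node is still reachable; none of the following was captured by the test itself, and the commands are only the standard minikube/systemd tooling already referenced in the kubeadm output:

	# Inspect the kubelet unit and its recent journal on the node:
	out/minikube-linux-arm64 ssh -p functional-389759 -- sudo systemctl status kubelet --no-pager
	out/minikube-linux-arm64 ssh -p functional-389759 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# A kubelet/containerd cgroup-driver mismatch is a common cause of a kubelet
	# that never answers on 127.0.0.1:10248/healthz; compare both sides:
	out/minikube-linux-arm64 ssh -p functional-389759 -- sudo grep SystemdCgroup /etc/containerd/config.toml
	out/minikube-linux-arm64 ssh -p functional-389759 -- sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml

If the drivers disagree, the suggestion printed above (--extra-config=kubelet.cgroup-driver=systemd) is the corresponding minikube-side fix.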
I1216 03:08:36.557261 1798370 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
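The inspect dump above is long; for this failure the fields that matter are the container state, the cgroup namespace mode, and the host port mapped to apiserver port 8441/tcp. A one-line sketch to pull just those, using the same --format indexing style minikube itself uses later in this log:

	docker inspect functional-389759 --format 'state={{.State.Status}} cgroupns={{.HostConfig.CgroupnsMode}} apiserver={{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'

Per the JSON above this yields state=running cgroupns=host apiserver=34357, i.e. the container itself is healthy and the failure is inside it.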
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (297.01286ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
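As the harness notes, a non-zero exit from `minikube status` can coexist with a Running host: the exit code reflects component health, not just the container state. A sketch of a fuller check, assuming the same binary as above (`--output json` prints per-component status):

	out/minikube-linux-arm64 status -p functional-389759 --output json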
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format table --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls                                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ delete         │ -p functional-853651                                                                                                                                    │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-389759 --alsologtostderr -v=8                                                                                                             │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:50 UTC │                     │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:latest                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add minikube-local-cache-test:functional-389759                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache delete minikube-local-cache-test:functional-389759                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl images                                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ cache          │ functional-389759 cache reload                                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ kubectl        │ functional-389759 kubectl -- --context functional-389759 get pods                                                                                       │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ start          │ -p functional-389759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:56:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:56:25.844373 1848358 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:56:25.844466 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844470 1848358 out.go:374] Setting ErrFile to fd 2...
	I1216 02:56:25.844474 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844836 1848358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:56:25.845570 1848358 out.go:368] Setting JSON to false
	I1216 02:56:25.846389 1848358 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":31130,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:56:25.846449 1848358 start.go:143] virtualization:  
	I1216 02:56:25.849867 1848358 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:56:25.854549 1848358 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:56:25.854652 1848358 notify.go:221] Checking for updates...
	I1216 02:56:25.860318 1848358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:56:25.863452 1848358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:56:25.866454 1848358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:56:25.869328 1848358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:56:25.872192 1848358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:56:25.875771 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:25.875865 1848358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:56:25.910877 1848358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:56:25.910989 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:25.979751 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:25.969640801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:56:25.979847 1848358 docker.go:319] overlay module found
	I1216 02:56:25.984585 1848358 out.go:179] * Using the docker driver based on existing profile
	I1216 02:56:25.987331 1848358 start.go:309] selected driver: docker
	I1216 02:56:25.987339 1848358 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:25.987425 1848358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:56:25.987525 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:26.045497 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:26.035789712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:56:26.045925 1848358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:56:26.045948 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:26.045996 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:26.046044 1848358 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:26.049158 1848358 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:56:26.052095 1848358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:56:26.055176 1848358 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:56:26.058088 1848358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:56:26.058108 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:26.058178 1848358 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:56:26.058195 1848358 cache.go:65] Caching tarball of preloaded images
	I1216 02:56:26.058305 1848358 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:56:26.058312 1848358 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:56:26.058447 1848358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:56:26.078911 1848358 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:56:26.078923 1848358 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:56:26.078944 1848358 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:56:26.078984 1848358 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:56:26.079085 1848358 start.go:364] duration metric: took 83.453µs to acquireMachinesLock for "functional-389759"
	I1216 02:56:26.079107 1848358 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:56:26.079112 1848358 fix.go:54] fixHost starting: 
	I1216 02:56:26.079431 1848358 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:56:26.097178 1848358 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:56:26.097205 1848358 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:56:26.100419 1848358 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:56:26.100450 1848358 machine.go:94] provisionDockerMachine start ...
	I1216 02:56:26.100545 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.118508 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.118832 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.118839 1848358 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:56:26.259148 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.259164 1848358 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:56:26.259234 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.277500 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.277820 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.277829 1848358 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:56:26.421165 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.421257 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.440349 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.440644 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.440657 1848358 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:56:26.579508 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:56:26.579533 1848358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:56:26.579555 1848358 ubuntu.go:190] setting up certificates
	I1216 02:56:26.579573 1848358 provision.go:84] configureAuth start
	I1216 02:56:26.579642 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:26.598860 1848358 provision.go:143] copyHostCerts
	I1216 02:56:26.598936 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:56:26.598944 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:56:26.599024 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:56:26.599152 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:56:26.599157 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:56:26.599183 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:56:26.599298 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:56:26.599302 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:56:26.599329 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:56:26.599373 1848358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:56:26.772331 1848358 provision.go:177] copyRemoteCerts
	I1216 02:56:26.772384 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:56:26.772421 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.790833 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:26.886672 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:56:26.903453 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:56:26.920711 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 02:56:26.938516 1848358 provision.go:87] duration metric: took 358.921052ms to configureAuth
	I1216 02:56:26.938533 1848358 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:56:26.938730 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:26.938735 1848358 machine.go:97] duration metric: took 838.281264ms to provisionDockerMachine
	I1216 02:56:26.938741 1848358 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:56:26.938751 1848358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:56:26.938797 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:56:26.938840 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.957601 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.062997 1848358 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:56:27.066589 1848358 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:56:27.066608 1848358 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:56:27.066618 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:56:27.066672 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:56:27.066743 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:56:27.066818 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:56:27.066859 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:56:27.074143 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:27.091762 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:56:27.109760 1848358 start.go:296] duration metric: took 171.004929ms for postStartSetup
	I1216 02:56:27.109845 1848358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:56:27.109892 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.130041 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.224282 1848358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:56:27.229295 1848358 fix.go:56] duration metric: took 1.150175721s for fixHost
	I1216 02:56:27.229312 1848358 start.go:83] releasing machines lock for "functional-389759", held for 1.150220136s
	I1216 02:56:27.229388 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:27.246922 1848358 ssh_runner.go:195] Run: cat /version.json
	I1216 02:56:27.246974 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.247232 1848358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:56:27.247302 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.269086 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.280897 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.370924 1848358 ssh_runner.go:195] Run: systemctl --version
	I1216 02:56:27.469438 1848358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:56:27.474082 1848358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:56:27.474143 1848358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:56:27.482716 1848358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:56:27.482730 1848358 start.go:496] detecting cgroup driver to use...
	I1216 02:56:27.482760 1848358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:56:27.482821 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:56:27.499295 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:56:27.512730 1848358 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:56:27.512788 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:56:27.529084 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:56:27.542618 1848358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:56:27.669326 1848358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:56:27.809661 1848358 docker.go:234] disabling docker service ...
	I1216 02:56:27.809726 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:56:27.825238 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:56:27.839007 1848358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:56:27.961490 1848358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:56:28.085730 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:56:28.099793 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:56:28.115219 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:56:28.124904 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:56:28.134481 1848358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:56:28.134543 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:56:28.143714 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.152978 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:56:28.161801 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.170944 1848358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:56:28.179475 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:56:28.188723 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:56:28.197979 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:56:28.206949 1848358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:56:28.214520 1848358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:56:28.222338 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.339529 1848358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 02:56:28.517809 1848358 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:56:28.517866 1848358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:56:28.522881 1848358 start.go:564] Will wait 60s for crictl version
	I1216 02:56:28.522937 1848358 ssh_runner.go:195] Run: which crictl
	I1216 02:56:28.526562 1848358 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:56:28.550167 1848358 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:56:28.550234 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.570328 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.596807 1848358 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:56:28.599682 1848358 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:56:28.616323 1848358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:56:28.623466 1848358 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 02:56:28.626293 1848358 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:56:28.626428 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:28.626509 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.651243 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.651255 1848358 containerd.go:534] Images already preloaded, skipping extraction
	I1216 02:56:28.651317 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.676192 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.676203 1848358 cache_images.go:86] Images are preloaded, skipping loading
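The preload check above amounts to listing images over CRI; the same query by hand:

	# list the images containerd already has, as the preload check does
	sudo crictl images --output json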
	I1216 02:56:28.676209 1848358 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:56:28.676312 1848358 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
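The ExecStart above lands in a systemd drop-in (scp'd below as 10-kubeadm.conf); a sketch of verifying and applying it by hand:

	# show the merged kubelet unit, including the drop-in
	systemctl cat kubelet
	# pick up the rewritten drop-in, as the log does a few lines below
	sudo systemctl daemon-reload
	sudo systemctl start kubelet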
	I1216 02:56:28.676373 1848358 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:56:28.700239 1848358 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 02:56:28.700256 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:28.700264 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:28.700272 1848358 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:56:28.700294 1848358 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:56:28.700400 1848358 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
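A config rendered like this can be sanity-checked offline once it has been scp'd to /var/tmp/minikube (below); a sketch, assuming a kubeadm recent enough to ship the config validate subcommand:

	# validate the rendered config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml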
	
	I1216 02:56:28.700473 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:56:28.708593 1848358 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:56:28.708655 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:56:28.716199 1848358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:56:28.728994 1848358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:56:28.742129 1848358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1216 02:56:28.754916 1848358 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:56:28.758765 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.878289 1848358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:56:29.187922 1848358 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:56:29.187939 1848358 certs.go:195] generating shared ca certs ...
	I1216 02:56:29.187954 1848358 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:56:29.188132 1848358 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:56:29.188175 1848358 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:56:29.188182 1848358 certs.go:257] generating profile certs ...
	I1216 02:56:29.188282 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:56:29.188344 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:56:29.188398 1848358 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:56:29.188534 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:56:29.188573 1848358 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:56:29.188580 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:56:29.188615 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:56:29.188648 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:56:29.188671 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:56:29.188729 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:29.189416 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:56:29.212546 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:56:29.235562 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:56:29.257334 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:56:29.278410 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:56:29.297639 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:56:29.316055 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:56:29.333992 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:56:29.351802 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:56:29.370197 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:56:29.388624 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:56:29.406325 1848358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:56:29.419477 1848358 ssh_runner.go:195] Run: openssl version
	I1216 02:56:29.425780 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.433488 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:56:29.440931 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444594 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444652 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.485312 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:56:29.492681 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.499838 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:56:29.507532 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511555 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511621 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.552382 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:56:29.559682 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.566808 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:56:29.574430 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578016 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578077 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.619735 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
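The hash-named symlinks tested above come straight from openssl's subject hash:

	# the subject hash names the /etc/ssl/certs/<hash>.0 symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941 here, matching the b5213941.0 link just checked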
	I1216 02:56:29.627282 1848358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:56:29.630975 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:56:29.674022 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:56:29.716546 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:56:29.760378 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:56:29.801675 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:56:29.842471 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
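-checkend 86400 asks whether a certificate expires within the next 86400 seconds (24 h); exit status 0 means it stays valid. The same sweep over the whole certs directory, as a sketch:

	# exit 0 = valid for another 24h; nonzero = expiring or expired
	sudo sh -c 'for c in /var/lib/minikube/certs/*.crt; do openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"; done'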
	I1216 02:56:29.883311 1848358 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:29.883412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:56:29.883472 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.910518 1848358 cri.go:89] found id: ""
	I1216 02:56:29.910580 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:56:29.918530 1848358 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:56:29.918539 1848358 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:56:29.918590 1848358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:56:29.926051 1848358 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:29.926594 1848358 kubeconfig.go:125] found "functional-389759" server: "https://192.168.49.2:8441"
	I1216 02:56:29.927850 1848358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:56:29.937055 1848358 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 02:41:54.425829655 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 02:56:28.747941655 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
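The drift check above is a plain diff -u of the deployed config against the freshly rendered one; exit status 1 is what triggers the reconfigure:

	# exit 0 = identical, 1 = drift detected, >1 = error
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new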
	I1216 02:56:29.937066 1848358 kubeadm.go:1161] stopping kube-system containers ...
	I1216 02:56:29.937078 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1216 02:56:29.937140 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.975717 1848358 cri.go:89] found id: ""
	I1216 02:56:29.975778 1848358 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 02:56:29.994835 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:56:30.004346 1848358 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 02:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 02:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 02:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 02:46 /etc/kubernetes/scheduler.conf
	
	I1216 02:56:30.004430 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 02:56:30.041702 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 02:56:30.052507 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.052569 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:56:30.061943 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.073420 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.073488 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.083069 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 02:56:30.092935 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.092994 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:56:30.101587 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:56:30.114178 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:30.166214 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.346212 1848358 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179973709s)
	I1216 02:56:31.346269 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.548322 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.601050 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
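The restart replays individual kubeadm init phases rather than a full init; factored into a loop, the same sequence would look like this sketch:

	# replay the phases run above, pinning PATH to the cached binaries
	# $phase is intentionally unquoted so multi-word phases split into arguments
	for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done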
	I1216 02:56:31.649581 1848358 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:56:31.649669 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:32.150228 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (the same pgrep probe repeated at ~500 ms intervals from I1216 02:56:32 through 02:57:30; no kube-apiserver process appeared) ...
	I1216 02:57:31.149886 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
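The wait above is a fixed-interval poll for the apiserver process; as a standalone sketch with the same 60 s budget:

	# probe every 500 ms, up to 120 attempts (~60 s), stop at first match
	for i in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.5
	done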
	I1216 02:57:31.650315 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:31.650394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:31.674930 1848358 cri.go:89] found id: ""
	I1216 02:57:31.674944 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.674951 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:31.674956 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:31.675016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:31.714000 1848358 cri.go:89] found id: ""
	I1216 02:57:31.714013 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.714021 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:31.714026 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:31.714086 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:31.747840 1848358 cri.go:89] found id: ""
	I1216 02:57:31.747854 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.747861 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:31.747866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:31.747926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:31.773860 1848358 cri.go:89] found id: ""
	I1216 02:57:31.773874 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.773886 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:31.773891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:31.773953 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:31.802242 1848358 cri.go:89] found id: ""
	I1216 02:57:31.802256 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.802263 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:31.802268 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:31.802327 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:31.827140 1848358 cri.go:89] found id: ""
	I1216 02:57:31.827170 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.827177 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:31.827183 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:31.827250 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:31.851813 1848358 cri.go:89] found id: ""
	I1216 02:57:31.851827 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.851834 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:31.851841 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:31.851852 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:31.907296 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:31.907315 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:31.924742 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:31.924759 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:31.990670 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:31.980837   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.981269   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.984774   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.985315   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.986770   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:31.980837   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.981269   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.984774   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.985315   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.986770   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:31.990681 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:31.990692 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:32.056720 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:32.056741 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
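Each gathering pass boils down to a handful of host-side commands; reproduced by hand:

	# the evidence minikube collects while the apiserver stays absent
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a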
	I1216 02:57:34.586741 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:34.596594 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:34.596656 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:34.624415 1848358 cri.go:89] found id: ""
	I1216 02:57:34.624430 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.624437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:34.624454 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:34.624529 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:34.648856 1848358 cri.go:89] found id: ""
	I1216 02:57:34.648877 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.648884 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:34.648889 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:34.648952 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:34.674838 1848358 cri.go:89] found id: ""
	I1216 02:57:34.674852 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.674859 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:34.674864 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:34.674938 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:34.720068 1848358 cri.go:89] found id: ""
	I1216 02:57:34.720082 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.720089 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:34.720093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:34.720152 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:34.749510 1848358 cri.go:89] found id: ""
	I1216 02:57:34.749525 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.749531 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:34.749541 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:34.749603 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:34.776711 1848358 cri.go:89] found id: ""
	I1216 02:57:34.776725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.776732 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:34.776737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:34.776797 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:34.801539 1848358 cri.go:89] found id: ""
	I1216 02:57:34.801552 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.801560 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:34.801568 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:34.801578 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:34.857992 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:34.858012 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:34.876290 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:34.876307 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:34.948190 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:34.939256   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.940046   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.941775   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.942456   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.944096   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:34.939256   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.940046   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.941775   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.942456   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.944096   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:34.948202 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:34.948213 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:35.015139 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:35.015162 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:37.549752 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:37.560125 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:37.560194 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:37.585130 1848358 cri.go:89] found id: ""
	I1216 02:57:37.585144 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.585151 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:37.585156 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:37.585216 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:37.610009 1848358 cri.go:89] found id: ""
	I1216 02:57:37.610023 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.610030 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:37.610035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:37.610096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:37.635414 1848358 cri.go:89] found id: ""
	I1216 02:57:37.635429 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.635436 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:37.635441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:37.635503 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:37.660026 1848358 cri.go:89] found id: ""
	I1216 02:57:37.660046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.660053 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:37.660059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:37.660119 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:37.702568 1848358 cri.go:89] found id: ""
	I1216 02:57:37.702583 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.702590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:37.702595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:37.702659 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:37.735671 1848358 cri.go:89] found id: ""
	I1216 02:57:37.735685 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.735693 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:37.735698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:37.735766 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:37.764451 1848358 cri.go:89] found id: ""
	I1216 02:57:37.764465 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.764472 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:37.764481 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:37.764492 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:37.781790 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:37.781808 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:37.850130 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:37.841387   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.841981   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.843649   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845020   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845734   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:37.841387   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.841981   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.843649   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845020   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845734   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:37.850150 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:37.850161 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:37.912286 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:37.912306 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:37.947545 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:37.947561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:40.504032 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:40.514627 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:40.514689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:40.543498 1848358 cri.go:89] found id: ""
	I1216 02:57:40.543513 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.543520 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:40.543524 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:40.543593 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:40.568106 1848358 cri.go:89] found id: ""
	I1216 02:57:40.568120 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.568127 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:40.568132 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:40.568190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:40.592290 1848358 cri.go:89] found id: ""
	I1216 02:57:40.592304 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.592317 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:40.592322 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:40.592382 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:40.617796 1848358 cri.go:89] found id: ""
	I1216 02:57:40.617811 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.617818 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:40.617823 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:40.617882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:40.643710 1848358 cri.go:89] found id: ""
	I1216 02:57:40.643725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.643732 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:40.643737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:40.643811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:40.672711 1848358 cri.go:89] found id: ""
	I1216 02:57:40.672731 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.672738 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:40.672743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:40.672802 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:40.704590 1848358 cri.go:89] found id: ""
	I1216 02:57:40.704604 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.704611 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:40.704620 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:40.704630 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:40.769622 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:40.769642 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:40.786992 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:40.787010 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:40.853579 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:40.844241   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.845343   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847164   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847823   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.849606   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:40.853590 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:40.853600 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:40.915814 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:40.915833 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:43.448229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:43.458340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:43.458399 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:43.481954 1848358 cri.go:89] found id: ""
	I1216 02:57:43.481967 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.481974 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:43.481979 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:43.482037 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:43.507588 1848358 cri.go:89] found id: ""
	I1216 02:57:43.507603 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.507610 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:43.507614 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:43.507684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:43.533164 1848358 cri.go:89] found id: ""
	I1216 02:57:43.533179 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.533188 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:43.533193 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:43.533255 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:43.558139 1848358 cri.go:89] found id: ""
	I1216 02:57:43.558152 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.558159 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:43.558164 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:43.558221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:43.587218 1848358 cri.go:89] found id: ""
	I1216 02:57:43.587244 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.587251 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:43.587256 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:43.587315 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:43.613584 1848358 cri.go:89] found id: ""
	I1216 02:57:43.613598 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.613605 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:43.613610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:43.613691 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:43.645887 1848358 cri.go:89] found id: ""
	I1216 02:57:43.645901 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.645908 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:43.645916 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:43.645928 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:43.662557 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:43.662574 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:43.745017 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:43.735622   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.736621   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738304   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738872   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.740427   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:43.745029 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:43.745040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:43.808792 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:43.808811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:43.837682 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:43.837698 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:46.396229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:46.406230 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:46.406302 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:46.429707 1848358 cri.go:89] found id: ""
	I1216 02:57:46.429721 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.429728 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:46.429733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:46.429796 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:46.454076 1848358 cri.go:89] found id: ""
	I1216 02:57:46.454090 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.454097 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:46.454101 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:46.454159 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:46.479472 1848358 cri.go:89] found id: ""
	I1216 02:57:46.479486 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.479493 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:46.479498 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:46.479557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:46.505579 1848358 cri.go:89] found id: ""
	I1216 02:57:46.505592 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.505599 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:46.505605 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:46.505665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:46.530373 1848358 cri.go:89] found id: ""
	I1216 02:57:46.530387 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.530394 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:46.530399 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:46.530464 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:46.554723 1848358 cri.go:89] found id: ""
	I1216 02:57:46.554736 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.554743 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:46.554748 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:46.554808 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:46.579147 1848358 cri.go:89] found id: ""
	I1216 02:57:46.579164 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.579171 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:46.579179 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:46.579189 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:46.634449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:46.634473 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:46.651968 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:46.651988 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:46.739219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:46.722068   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.723633   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.732527   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.733244   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.734892   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:46.739239 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:46.739250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:46.812956 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:46.812976 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
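Each polling cycle above issues one crictl query per control-plane component and finds no containers at all, not even exited ones. The same sweep can be reproduced inside the node with a single loop (a sketch built only from the commands already shown in this log):

    # List container IDs per component; prints <none> when crictl returns nothing.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done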
	I1216 02:57:49.345440 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:49.356029 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:49.356092 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:49.381514 1848358 cri.go:89] found id: ""
	I1216 02:57:49.381528 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.381535 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:49.381540 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:49.381608 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:49.411765 1848358 cri.go:89] found id: ""
	I1216 02:57:49.411779 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.411786 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:49.411791 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:49.411854 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:49.440610 1848358 cri.go:89] found id: ""
	I1216 02:57:49.440624 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.440631 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:49.440637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:49.440705 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:49.470688 1848358 cri.go:89] found id: ""
	I1216 02:57:49.470702 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.470709 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:49.470714 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:49.470774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:49.497170 1848358 cri.go:89] found id: ""
	I1216 02:57:49.497184 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.497191 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:49.497196 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:49.497254 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:49.521925 1848358 cri.go:89] found id: ""
	I1216 02:57:49.521940 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.521947 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:49.521952 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:49.522011 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:49.546344 1848358 cri.go:89] found id: ""
	I1216 02:57:49.546358 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.546366 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:49.546374 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:49.546385 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:49.602407 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:49.602426 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:49.619246 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:49.619263 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:49.683476 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:49.674979   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.675736   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677328   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677797   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.679458   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:49.683488 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:49.683499 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:49.752732 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:49.752753 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:52.289101 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:52.300210 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:52.300272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:52.327757 1848358 cri.go:89] found id: ""
	I1216 02:57:52.327772 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.327779 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:52.327784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:52.327842 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:52.352750 1848358 cri.go:89] found id: ""
	I1216 02:57:52.352764 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.352771 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:52.352776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:52.352834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:52.377100 1848358 cri.go:89] found id: ""
	I1216 02:57:52.377114 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.377135 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:52.377140 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:52.377210 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:52.401376 1848358 cri.go:89] found id: ""
	I1216 02:57:52.401390 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.401397 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:52.401402 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:52.401462 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:52.428592 1848358 cri.go:89] found id: ""
	I1216 02:57:52.428606 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.428613 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:52.428618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:52.428677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:52.457192 1848358 cri.go:89] found id: ""
	I1216 02:57:52.457206 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.457213 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:52.457218 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:52.457276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:52.481473 1848358 cri.go:89] found id: ""
	I1216 02:57:52.481494 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.481501 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:52.481509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:52.481519 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:52.540087 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:52.540106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:52.560374 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:52.560391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:52.628219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:52.619222   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.619906   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.621689   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.622192   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.623773   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:52.628231 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:52.628241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:52.692110 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:52.692130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:55.226607 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:55.236818 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:55.236879 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:55.265073 1848358 cri.go:89] found id: ""
	I1216 02:57:55.265087 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.265094 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:55.265099 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:55.265160 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:55.291262 1848358 cri.go:89] found id: ""
	I1216 02:57:55.291276 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.291284 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:55.291289 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:55.291357 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:55.320515 1848358 cri.go:89] found id: ""
	I1216 02:57:55.320539 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.320546 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:55.320551 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:55.320620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:55.348402 1848358 cri.go:89] found id: ""
	I1216 02:57:55.348426 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.348433 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:55.348438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:55.348500 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:55.373391 1848358 cri.go:89] found id: ""
	I1216 02:57:55.373405 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.373413 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:55.373418 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:55.373480 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:55.402098 1848358 cri.go:89] found id: ""
	I1216 02:57:55.402111 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.402118 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:55.402124 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:55.402183 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:55.427824 1848358 cri.go:89] found id: ""
	I1216 02:57:55.427838 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.427845 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:55.427853 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:55.427863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:55.497187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:55.497216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:55.526960 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:55.526981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:55.585085 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:55.585105 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:55.602223 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:55.602241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:55.671427 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:55.662836   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.663796   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665445   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665748   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.667112   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
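Since no control-plane containers were ever created, the actionable signal is in the kubelet and containerd journals that each cycle gathers. The equivalent manual commands, run from the host (a sketch; same units and line counts as the gatherer above):

    minikube -p functional-389759 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
    minikube -p functional-389759 ssh -- sudo journalctl -u containerd -n 400 --no-pager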
	I1216 02:57:58.171689 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:58.181822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:58.181885 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:58.206129 1848358 cri.go:89] found id: ""
	I1216 02:57:58.206143 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.206150 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:58.206155 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:58.206214 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:58.230940 1848358 cri.go:89] found id: ""
	I1216 02:57:58.230954 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.230960 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:58.230966 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:58.231024 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:58.256698 1848358 cri.go:89] found id: ""
	I1216 02:57:58.256712 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.256720 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:58.256724 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:58.256788 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:58.281370 1848358 cri.go:89] found id: ""
	I1216 02:57:58.281385 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.281392 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:58.281396 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:58.281456 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:58.313032 1848358 cri.go:89] found id: ""
	I1216 02:57:58.313046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.313054 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:58.313059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:58.313124 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:58.337968 1848358 cri.go:89] found id: ""
	I1216 02:57:58.337982 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.337989 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:58.337994 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:58.338052 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:58.367215 1848358 cri.go:89] found id: ""
	I1216 02:57:58.367231 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.367239 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:58.367247 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:58.367259 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:58.433078 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:58.423612   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.424320   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426139   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426759   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.428489   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:58.433088 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:58.433099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:58.496751 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:58.496771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:58.528345 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:58.528362 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:58.585231 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:58.585249 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
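This per-component sweep and journal dump repeats every few seconds until the wait-for-apiserver deadline expires. The same information can be collected in one shot from the host via minikube's log bundle (a sketch; the --problems flag, if available in this build, restricts output to lines matching known failure signatures):

    minikube -p functional-389759 logs --problems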
	I1216 02:58:01.103256 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:01.114505 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:01.114572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:01.141817 1848358 cri.go:89] found id: ""
	I1216 02:58:01.141831 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.141838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:01.141843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:01.141908 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:01.170638 1848358 cri.go:89] found id: ""
	I1216 02:58:01.170653 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.170660 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:01.170667 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:01.170733 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:01.197958 1848358 cri.go:89] found id: ""
	I1216 02:58:01.197973 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.197980 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:01.197986 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:01.198051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:01.225715 1848358 cri.go:89] found id: ""
	I1216 02:58:01.225731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.225738 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:01.225744 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:01.225803 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:01.256157 1848358 cri.go:89] found id: ""
	I1216 02:58:01.256171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.256178 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:01.256184 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:01.256244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:01.281610 1848358 cri.go:89] found id: ""
	I1216 02:58:01.281625 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.281633 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:01.281638 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:01.281702 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:01.306348 1848358 cri.go:89] found id: ""
	I1216 02:58:01.306363 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.306370 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:01.306377 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:01.306388 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:01.335207 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:01.335224 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:01.392222 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:01.392242 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:01.408874 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:01.408890 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:01.472601 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:01.464071   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.464670   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.466724   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.467464   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.468593   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:01.472613 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:01.472626 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:04.035738 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:04.046578 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:04.046661 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:04.072441 1848358 cri.go:89] found id: ""
	I1216 02:58:04.072456 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.072463 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:04.072468 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:04.072531 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:04.103113 1848358 cri.go:89] found id: ""
	I1216 02:58:04.103128 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.103135 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:04.103139 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:04.103208 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:04.127981 1848358 cri.go:89] found id: ""
	I1216 02:58:04.127995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.128002 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:04.128007 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:04.128067 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:04.153050 1848358 cri.go:89] found id: ""
	I1216 02:58:04.153065 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.153072 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:04.153077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:04.153139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:04.176840 1848358 cri.go:89] found id: ""
	I1216 02:58:04.176854 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.176879 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:04.176885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:04.176954 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:04.205747 1848358 cri.go:89] found id: ""
	I1216 02:58:04.205771 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.205779 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:04.205784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:04.205853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:04.234453 1848358 cri.go:89] found id: ""
	I1216 02:58:04.234467 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.234474 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:04.234483 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:04.234505 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:04.294713 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:04.294732 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:04.312011 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:04.312029 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:04.378295 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:04.369406   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.370236   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372024   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372637   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.374434   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:04.369406   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.370236   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372024   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372637   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.374434   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:04.378314 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:04.378325 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:04.440962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:04.440984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:06.970088 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:06.983751 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:06.983819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:07.013657 1848358 cri.go:89] found id: ""
	I1216 02:58:07.013672 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.013679 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:07.013684 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:07.013752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:07.038882 1848358 cri.go:89] found id: ""
	I1216 02:58:07.038896 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.038904 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:07.038909 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:07.038968 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:07.064215 1848358 cri.go:89] found id: ""
	I1216 02:58:07.064230 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.064237 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:07.064242 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:07.064304 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:07.088144 1848358 cri.go:89] found id: ""
	I1216 02:58:07.088158 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.088165 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:07.088170 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:07.088229 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:07.112044 1848358 cri.go:89] found id: ""
	I1216 02:58:07.112059 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.112066 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:07.112071 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:07.112137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:07.138570 1848358 cri.go:89] found id: ""
	I1216 02:58:07.138586 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.138593 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:07.138599 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:07.138658 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:07.166931 1848358 cri.go:89] found id: ""
	I1216 02:58:07.166945 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.166952 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:07.166959 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:07.166973 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:07.197292 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:07.197308 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:07.255003 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:07.255023 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:07.273531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:07.273547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:07.338842 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:07.330204   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.330976   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.332722   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.333328   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.334951   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:07.330204   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.330976   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.332722   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.333328   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.334951   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:07.338852 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:07.338863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:09.902725 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:09.913150 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:09.913213 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:09.946614 1848358 cri.go:89] found id: ""
	I1216 02:58:09.946627 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.946634 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:09.946639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:09.946703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:09.975470 1848358 cri.go:89] found id: ""
	I1216 02:58:09.975484 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.975491 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:09.975496 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:09.975557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:10.002745 1848358 cri.go:89] found id: ""
	I1216 02:58:10.002773 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.002782 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:10.002787 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:10.002866 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:10.035489 1848358 cri.go:89] found id: ""
	I1216 02:58:10.035504 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.035512 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:10.035517 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:10.035581 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:10.062019 1848358 cri.go:89] found id: ""
	I1216 02:58:10.062044 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.062052 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:10.062059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:10.062139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:10.088952 1848358 cri.go:89] found id: ""
	I1216 02:58:10.088977 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.088986 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:10.088991 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:10.089061 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:10.115714 1848358 cri.go:89] found id: ""
	I1216 02:58:10.115736 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.115744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:10.115752 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:10.115762 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:10.172504 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:10.172524 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:10.190804 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:10.190821 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:10.258662 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:10.249875   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.250514   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252114   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252638   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.254176   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:10.249875   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.250514   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252114   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252638   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.254176   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:10.258675 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:10.258686 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:10.321543 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:10.321562 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
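The seven `crictl ps` calls in each cycle are one command with a different `--name` filter, and every call returns an empty ID list, which is why each component is logged as not found. The scan condenses into a single loop (a sketch using only commands that appear verbatim in the report; the component names are taken from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"   # empty output matches the 'found id: ""' lines above
    done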
	I1216 02:58:12.849334 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:12.859284 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:12.859345 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:12.884624 1848358 cri.go:89] found id: ""
	I1216 02:58:12.884640 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.884648 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:12.884653 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:12.884722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:12.908735 1848358 cri.go:89] found id: ""
	I1216 02:58:12.908749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.908756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:12.908761 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:12.908819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:12.944827 1848358 cri.go:89] found id: ""
	I1216 02:58:12.944841 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.944848 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:12.944854 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:12.944917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:12.974281 1848358 cri.go:89] found id: ""
	I1216 02:58:12.974295 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.974302 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:12.974308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:12.974367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:13.008278 1848358 cri.go:89] found id: ""
	I1216 02:58:13.008294 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.008302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:13.008307 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:13.008376 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:13.034272 1848358 cri.go:89] found id: ""
	I1216 02:58:13.034286 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.034294 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:13.034299 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:13.034361 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:13.064663 1848358 cri.go:89] found id: ""
	I1216 02:58:13.064688 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.064695 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:13.064703 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:13.064716 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:13.127826 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:13.127848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:13.158482 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:13.158498 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:13.218053 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:13.218072 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:13.234830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:13.234846 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:13.298317 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:13.289893   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.290910   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.291765   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293309   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293580   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:13.289893   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.290910   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.291765   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293309   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293580   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:15.798590 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:15.809144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:15.809225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:15.834683 1848358 cri.go:89] found id: ""
	I1216 02:58:15.834696 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.834704 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:15.834709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:15.834774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:15.860001 1848358 cri.go:89] found id: ""
	I1216 02:58:15.860030 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.860038 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:15.860042 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:15.860113 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:15.884488 1848358 cri.go:89] found id: ""
	I1216 02:58:15.884503 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.884510 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:15.884515 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:15.884572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:15.908030 1848358 cri.go:89] found id: ""
	I1216 02:58:15.908045 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.908051 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:15.908056 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:15.908116 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:15.932641 1848358 cri.go:89] found id: ""
	I1216 02:58:15.932654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.932661 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:15.932666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:15.932723 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:15.962741 1848358 cri.go:89] found id: ""
	I1216 02:58:15.962754 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.962772 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:15.962779 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:15.962836 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:15.990774 1848358 cri.go:89] found id: ""
	I1216 02:58:15.990788 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.990806 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:15.990829 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:15.990838 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:16.067729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:16.067748 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:16.098615 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:16.098635 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:16.154944 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:16.154963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:16.172510 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:16.172527 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:16.237380 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:16.229269   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.229868   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231379   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231950   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.233594   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:16.229269   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.229868   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231379   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231950   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.233594   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:18.738100 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:18.751636 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:18.751717 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:18.779608 1848358 cri.go:89] found id: ""
	I1216 02:58:18.779622 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.779629 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:18.779634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:18.779693 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:18.805721 1848358 cri.go:89] found id: ""
	I1216 02:58:18.805735 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.805742 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:18.805747 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:18.805812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:18.831187 1848358 cri.go:89] found id: ""
	I1216 02:58:18.831203 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.831210 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:18.831215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:18.831280 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:18.857343 1848358 cri.go:89] found id: ""
	I1216 02:58:18.857367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.857375 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:18.857380 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:18.857448 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:18.882737 1848358 cri.go:89] found id: ""
	I1216 02:58:18.882751 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.882758 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:18.882765 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:18.882834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:18.907486 1848358 cri.go:89] found id: ""
	I1216 02:58:18.907500 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.907508 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:18.907513 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:18.907573 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:18.939361 1848358 cri.go:89] found id: ""
	I1216 02:58:18.939375 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.939382 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:18.939390 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:18.939401 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:19.019241 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:19.010485   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.010907   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.012525   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.013150   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.014918   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:19.010485   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.010907   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.012525   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.013150   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.014918   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:19.019251 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:19.019262 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:19.081820 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:19.081842 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:19.110025 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:19.110042 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:19.166216 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:19.166236 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:21.684597 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:21.694910 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:21.694974 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:21.719581 1848358 cri.go:89] found id: ""
	I1216 02:58:21.719595 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.719602 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:21.719607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:21.719670 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:21.745661 1848358 cri.go:89] found id: ""
	I1216 02:58:21.745675 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.745682 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:21.745688 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:21.745745 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:21.770329 1848358 cri.go:89] found id: ""
	I1216 02:58:21.770342 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.770349 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:21.770354 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:21.770425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:21.795402 1848358 cri.go:89] found id: ""
	I1216 02:58:21.795416 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.795423 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:21.795434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:21.795492 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:21.821959 1848358 cri.go:89] found id: ""
	I1216 02:58:21.821972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.821979 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:21.821984 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:21.822043 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:21.845121 1848358 cri.go:89] found id: ""
	I1216 02:58:21.845135 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.845142 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:21.845148 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:21.845209 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:21.868958 1848358 cri.go:89] found id: ""
	I1216 02:58:21.868972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.868979 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:21.868987 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:21.868997 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:21.932460 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:21.924049   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.924916   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926515   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926825   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.928346   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:21.924049   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.924916   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926515   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926825   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.928346   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:21.932490 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:21.932502 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:22.006384 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:22.006415 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:22.040639 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:22.040655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:22.097981 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:22.098000 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
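The "Gathering logs" steps in every cycle resolve to the same four host commands, shown verbatim in the Run lines above; they are collected here as a convenience sketch for re-running the diagnostics manually over SSH (it assumes the same units, binary paths, and kubeconfig location as on the test node):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # 'describe nodes' keeps failing while the apiserver is down, as the stderr blocks show:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig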
	I1216 02:58:24.615636 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:24.626423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:24.626486 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:24.650890 1848358 cri.go:89] found id: ""
	I1216 02:58:24.650904 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.650911 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:24.650916 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:24.650984 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:24.676132 1848358 cri.go:89] found id: ""
	I1216 02:58:24.676146 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.676153 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:24.676158 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:24.676219 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:24.705732 1848358 cri.go:89] found id: ""
	I1216 02:58:24.705746 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.705753 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:24.705758 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:24.705820 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:24.729899 1848358 cri.go:89] found id: ""
	I1216 02:58:24.729914 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.729922 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:24.729927 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:24.729988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:24.760724 1848358 cri.go:89] found id: ""
	I1216 02:58:24.760744 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.760752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:24.760756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:24.760821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:24.789128 1848358 cri.go:89] found id: ""
	I1216 02:58:24.789144 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.789151 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:24.789157 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:24.789221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:24.814525 1848358 cri.go:89] found id: ""
	I1216 02:58:24.814539 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.814548 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:24.814555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:24.814567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:24.845234 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:24.845251 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:24.904816 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:24.904835 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:24.922721 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:24.922744 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:25.017286 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:25.006539   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.007431   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.009403   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.010041   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.013452   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:25.006539   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.007431   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.009403   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.010041   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.013452   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:25.017298 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:25.017309 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.580148 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:27.590499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:27.590563 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:27.614749 1848358 cri.go:89] found id: ""
	I1216 02:58:27.614764 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.614771 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:27.614776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:27.614835 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:27.638735 1848358 cri.go:89] found id: ""
	I1216 02:58:27.638749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.638756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:27.638762 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:27.638821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:27.665480 1848358 cri.go:89] found id: ""
	I1216 02:58:27.665495 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.665503 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:27.665508 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:27.665565 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:27.695981 1848358 cri.go:89] found id: ""
	I1216 02:58:27.695996 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.696004 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:27.696009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:27.696088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:27.720368 1848358 cri.go:89] found id: ""
	I1216 02:58:27.720390 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.720397 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:27.720403 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:27.720469 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:27.746357 1848358 cri.go:89] found id: ""
	I1216 02:58:27.746371 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.746377 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:27.746383 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:27.746441 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:27.770684 1848358 cri.go:89] found id: ""
	I1216 02:58:27.770708 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.770716 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:27.770724 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:27.770734 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.836245 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:27.836265 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:27.865946 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:27.865964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:27.924653 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:27.924675 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:27.945999 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:27.946015 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:28.027275 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:28.018924   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.019507   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021144   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021655   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.023260   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
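	The block repeating above and below is minikube's log-collection loop: one pgrep for the apiserver process, then a crictl listing for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), all of which come back empty. A compact sketch of the same enumeration, assuming it runs as root inside the node with crictl on PATH:

	  # hypothetical condensed form of the per-component crictl checks above
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(crictl ps -a --quiet --name="$c")
	    echo "$c: ${ids:-<none>}"
	  done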
	I1216 02:58:30.527490 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:30.537746 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:30.537811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:30.562783 1848358 cri.go:89] found id: ""
	I1216 02:58:30.562797 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.562805 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:30.562810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:30.562882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:30.587495 1848358 cri.go:89] found id: ""
	I1216 02:58:30.587509 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.587515 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:30.587521 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:30.587583 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:30.611375 1848358 cri.go:89] found id: ""
	I1216 02:58:30.611392 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.611400 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:30.611406 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:30.611472 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:30.635442 1848358 cri.go:89] found id: ""
	I1216 02:58:30.635457 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.635464 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:30.635469 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:30.635527 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:30.659725 1848358 cri.go:89] found id: ""
	I1216 02:58:30.659745 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.659752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:30.659757 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:30.659819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:30.683639 1848358 cri.go:89] found id: ""
	I1216 02:58:30.683654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.683661 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:30.683666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:30.683725 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:30.709231 1848358 cri.go:89] found id: ""
	I1216 02:58:30.709246 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.709252 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:30.709260 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:30.709271 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:30.765116 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:30.765136 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:30.782213 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:30.782230 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:30.843173 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:30.835436   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.836096   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837188   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837822   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.839482   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:30.843184 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:30.843195 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:30.905457 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:30.905477 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.448949 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:33.458942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:33.459006 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:33.494559 1848358 cri.go:89] found id: ""
	I1216 02:58:33.494573 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.494582 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:33.494602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:33.494672 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:33.521008 1848358 cri.go:89] found id: ""
	I1216 02:58:33.521028 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.521036 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:33.521041 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:33.521103 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:33.545598 1848358 cri.go:89] found id: ""
	I1216 02:58:33.545613 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.545620 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:33.545625 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:33.545684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:33.573194 1848358 cri.go:89] found id: ""
	I1216 02:58:33.573207 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.573214 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:33.573219 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:33.573284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:33.597747 1848358 cri.go:89] found id: ""
	I1216 02:58:33.597761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.597784 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:33.597789 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:33.597859 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:33.621788 1848358 cri.go:89] found id: ""
	I1216 02:58:33.621803 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.621810 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:33.621815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:33.621892 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:33.646528 1848358 cri.go:89] found id: ""
	I1216 02:58:33.646543 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.646550 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:33.646557 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:33.646567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:33.708165 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:33.708187 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.736001 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:33.736018 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:33.791763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:33.791786 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:33.808896 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:33.808912 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:33.876753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:33.868694   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.869434   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871119   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871558   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.873040   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
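	The describe-nodes attempts all use the versioned kubectl at /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl against /var/lib/minikube/kubeconfig, so the address they dial is whatever server that kubeconfig names. A one-line sketch to confirm it, assuming the kubeconfig is readable inside the node:

	  # hypothetical check of the server the in-node kubeconfig targets
	  sudo grep 'server:' /var/lib/minikube/kubeconfig   # expected to show https://localhost:8441, matching the refused dials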
	I1216 02:58:36.376982 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:36.386962 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:36.387033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:36.410927 1848358 cri.go:89] found id: ""
	I1216 02:58:36.410941 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.410948 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:36.410954 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:36.411013 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:36.436158 1848358 cri.go:89] found id: ""
	I1216 02:58:36.436171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.436179 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:36.436189 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:36.436260 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:36.460716 1848358 cri.go:89] found id: ""
	I1216 02:58:36.460730 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.460737 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:36.460743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:36.460815 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:36.485244 1848358 cri.go:89] found id: ""
	I1216 02:58:36.485258 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.485266 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:36.485272 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:36.485335 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:36.509347 1848358 cri.go:89] found id: ""
	I1216 02:58:36.509361 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.509368 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:36.509374 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:36.509434 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:36.534352 1848358 cri.go:89] found id: ""
	I1216 02:58:36.534367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.534374 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:36.534419 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:36.534481 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:36.560075 1848358 cri.go:89] found id: ""
	I1216 02:58:36.560090 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.560097 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:36.560105 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:36.560116 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:36.618652 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:36.618670 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:36.635627 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:36.635643 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:36.704527 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:36.695687   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.696277   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698043   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698600   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.700190   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:36.704537 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:36.704550 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:36.767179 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:36.767199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:39.295686 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:39.305848 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:39.305909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:39.329771 1848358 cri.go:89] found id: ""
	I1216 02:58:39.329785 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.329792 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:39.329797 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:39.329857 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:39.354814 1848358 cri.go:89] found id: ""
	I1216 02:58:39.354829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.354836 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:39.354841 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:39.354900 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:39.380095 1848358 cri.go:89] found id: ""
	I1216 02:58:39.380110 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.380117 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:39.380122 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:39.380182 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:39.404438 1848358 cri.go:89] found id: ""
	I1216 02:58:39.404453 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.404460 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:39.404465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:39.404526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:39.432615 1848358 cri.go:89] found id: ""
	I1216 02:58:39.432630 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.432636 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:39.432644 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:39.432709 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:39.456879 1848358 cri.go:89] found id: ""
	I1216 02:58:39.456893 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.456900 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:39.456905 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:39.456966 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:39.481400 1848358 cri.go:89] found id: ""
	I1216 02:58:39.481415 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.481421 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:39.481430 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:39.481441 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:39.540413 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:39.540433 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:39.558600 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:39.558618 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:39.623191 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:39.614791   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.615619   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617365   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617686   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.619229   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:39.623201 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:39.623212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:39.685663 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:39.685683 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:42.212532 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:42.242820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:42.242893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:42.277407 1848358 cri.go:89] found id: ""
	I1216 02:58:42.277427 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.277435 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:42.277441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:42.277513 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:42.313862 1848358 cri.go:89] found id: ""
	I1216 02:58:42.313877 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.313893 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:42.313898 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:42.313963 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:42.345979 1848358 cri.go:89] found id: ""
	I1216 02:58:42.345995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.346003 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:42.346009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:42.346075 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:42.372530 1848358 cri.go:89] found id: ""
	I1216 02:58:42.372545 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.372552 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:42.372558 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:42.372622 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:42.400807 1848358 cri.go:89] found id: ""
	I1216 02:58:42.400821 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.400829 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:42.400834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:42.400901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:42.426053 1848358 cri.go:89] found id: ""
	I1216 02:58:42.426067 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.426074 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:42.426079 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:42.426137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:42.453460 1848358 cri.go:89] found id: ""
	I1216 02:58:42.453475 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.453482 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:42.453490 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:42.453500 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:42.509219 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:42.509237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:42.526995 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:42.527011 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:42.589697 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:42.581361   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.581873   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.583507   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.584135   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.585772   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:42.589706 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:42.589723 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:42.655306 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:42.655326 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:45.183328 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:45.217035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:45.217117 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:45.257225 1848358 cri.go:89] found id: ""
	I1216 02:58:45.257247 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.257258 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:45.257264 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:45.257334 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:45.304389 1848358 cri.go:89] found id: ""
	I1216 02:58:45.304407 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.304416 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:45.304423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:45.304509 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:45.334339 1848358 cri.go:89] found id: ""
	I1216 02:58:45.334354 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.334362 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:45.334367 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:45.334435 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:45.360176 1848358 cri.go:89] found id: ""
	I1216 02:58:45.360190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.360198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:45.360203 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:45.360263 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:45.384648 1848358 cri.go:89] found id: ""
	I1216 02:58:45.384663 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.384669 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:45.384678 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:45.384738 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:45.411115 1848358 cri.go:89] found id: ""
	I1216 02:58:45.411131 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.411138 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:45.411144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:45.411218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:45.437746 1848358 cri.go:89] found id: ""
	I1216 02:58:45.437761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.437768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:45.437776 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:45.437797 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:45.500791 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:45.500811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:45.530882 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:45.530899 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:45.588591 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:45.588609 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:45.605872 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:45.605900 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:45.673187 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:45.663658   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.664990   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.665900   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.667592   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.668146   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
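	Note the cadence: successive pgrep attempts land roughly 3 s apart (02:58:45.18, 02:58:48.17, 02:58:51.09), so this cycle simply repeats until the start timeout expires. A hedged sketch of such an outer wait loop, using a fixed 3 s interval and a made-up 300 s deadline rather than minikube's actual retry logic:

	  # hypothetical wait-for-apiserver loop matching the cadence in these timestamps
	  deadline=$((SECONDS + 300))
	  until pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    [ "$SECONDS" -ge "$deadline" ] && { echo 'apiserver never came up' >&2; exit 1; }
	    sleep 3
	  done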
	I1216 02:58:48.173453 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:48.186360 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:48.186425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:48.216541 1848358 cri.go:89] found id: ""
	I1216 02:58:48.216556 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.216563 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:48.216568 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:48.216633 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:48.243385 1848358 cri.go:89] found id: ""
	I1216 02:58:48.243399 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.243407 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:48.243412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:48.243473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:48.268738 1848358 cri.go:89] found id: ""
	I1216 02:58:48.268752 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.268759 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:48.268764 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:48.268825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:48.293634 1848358 cri.go:89] found id: ""
	I1216 02:58:48.293649 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.293657 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:48.293662 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:48.293722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:48.320780 1848358 cri.go:89] found id: ""
	I1216 02:58:48.320796 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.320805 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:48.320810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:48.320872 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:48.344687 1848358 cri.go:89] found id: ""
	I1216 02:58:48.344701 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.344710 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:48.344715 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:48.344775 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:48.368368 1848358 cri.go:89] found id: ""
	I1216 02:58:48.368383 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.368390 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:48.368398 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:48.368407 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:48.424495 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:48.424515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:48.441644 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:48.441660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:48.506701 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:48.498238   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.498920   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.500649   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.501232   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.502941   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:48.506710 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:48.506721 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:48.569962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:48.569984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:51.098190 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:51.108977 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:51.109048 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:51.134223 1848358 cri.go:89] found id: ""
	I1216 02:58:51.134237 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.134244 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:51.134249 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:51.134310 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:51.161239 1848358 cri.go:89] found id: ""
	I1216 02:58:51.161253 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.161261 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:51.161266 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:51.161326 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:51.202211 1848358 cri.go:89] found id: ""
	I1216 02:58:51.202225 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.202232 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:51.202237 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:51.202296 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:51.233630 1848358 cri.go:89] found id: ""
	I1216 02:58:51.233651 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.233658 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:51.233663 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:51.233728 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:51.270204 1848358 cri.go:89] found id: ""
	I1216 02:58:51.270219 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.270233 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:51.270238 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:51.270301 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:51.298689 1848358 cri.go:89] found id: ""
	I1216 02:58:51.298705 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.298716 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:51.298722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:51.298799 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:51.323107 1848358 cri.go:89] found id: ""
	I1216 02:58:51.323126 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.323133 1848358 logs.go:284] No container was found matching "kindnet"
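The sweep above is minikube's CRI container discovery: it asks crictl for containers matching each expected control-plane component (plus the kindnet CNI), finds none, and so falls back to gathering unit logs. Run standalone inside the node, the sweep amounts to roughly this sketch (component list copied from the log lines above):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done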
	I1216 02:58:51.323140 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:51.323150 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:51.386665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:51.386693 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:51.404372 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:51.404391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:51.469512 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:51.460793   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.461262   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.462922   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.463490   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.465130   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:51.469532 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:51.469554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:51.535704 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:51.535725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:54.065223 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:54.077244 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:54.077307 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:54.106090 1848358 cri.go:89] found id: ""
	I1216 02:58:54.106103 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.106110 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:54.106115 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:54.106177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:54.131805 1848358 cri.go:89] found id: ""
	I1216 02:58:54.131819 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.131833 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:54.131838 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:54.131899 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:54.156816 1848358 cri.go:89] found id: ""
	I1216 02:58:54.156829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.156837 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:54.156842 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:54.156901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:54.181654 1848358 cri.go:89] found id: ""
	I1216 02:58:54.181669 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.181693 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:54.181698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:54.181765 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:54.219797 1848358 cri.go:89] found id: ""
	I1216 02:58:54.219812 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.219819 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:54.219833 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:54.219910 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:54.251176 1848358 cri.go:89] found id: ""
	I1216 02:58:54.251190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.251197 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:54.251202 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:54.251265 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:54.275716 1848358 cri.go:89] found id: ""
	I1216 02:58:54.275731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.275739 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:54.275747 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:54.275758 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:54.338395 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:54.330425   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.330860   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332372   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332678   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.334183   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:54.338408 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:54.338429 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:54.401729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:54.401749 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:54.429361 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:54.429376 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:54.489525 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:54.489545 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:57.006993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:57.017732 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:57.017792 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:57.042221 1848358 cri.go:89] found id: ""
	I1216 02:58:57.042235 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.042242 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:57.042248 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:57.042316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:57.069364 1848358 cri.go:89] found id: ""
	I1216 02:58:57.069378 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.069385 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:57.069390 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:57.069450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:57.093795 1848358 cri.go:89] found id: ""
	I1216 02:58:57.093808 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.093815 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:57.093820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:57.093881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:57.118148 1848358 cri.go:89] found id: ""
	I1216 02:58:57.118161 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.118168 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:57.118177 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:57.118235 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:57.142161 1848358 cri.go:89] found id: ""
	I1216 02:58:57.142175 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.142182 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:57.142187 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:57.142247 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:57.169165 1848358 cri.go:89] found id: ""
	I1216 02:58:57.169178 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.169186 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:57.169191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:57.169256 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:57.200840 1848358 cri.go:89] found id: ""
	I1216 02:58:57.200855 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.200862 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:57.200870 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:57.200881 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:57.260426 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:57.260444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:57.285637 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:57.285654 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:57.350704 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:57.342168   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.342999   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.344857   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.345195   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.346755   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:57.350714 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:57.350727 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:57.413587 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:57.413606 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:59.944007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:59.954621 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:59.954685 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:59.979450 1848358 cri.go:89] found id: ""
	I1216 02:58:59.979466 1848358 logs.go:282] 0 containers: []
	W1216 02:58:59.979474 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:59.979479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:59.979543 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:00.040218 1848358 cri.go:89] found id: ""
	I1216 02:59:00.040237 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.040245 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:00.040251 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:00.040325 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:00.225643 1848358 cri.go:89] found id: ""
	I1216 02:59:00.225659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.225666 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:00.225679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:00.225749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:00.292916 1848358 cri.go:89] found id: ""
	I1216 02:59:00.292933 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.292941 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:00.292947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:00.293016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:00.327359 1848358 cri.go:89] found id: ""
	I1216 02:59:00.327375 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.327383 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:00.327389 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:00.327463 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:00.362091 1848358 cri.go:89] found id: ""
	I1216 02:59:00.362107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.362116 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:00.362121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:00.362205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:00.392615 1848358 cri.go:89] found id: ""
	I1216 02:59:00.392648 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.392656 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:00.392665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:00.392677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:00.411628 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:00.411646 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:00.485425 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:00.476589   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.477159   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.478750   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.479401   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.480376   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:00.485435 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:00.485446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:00.548759 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:00.548779 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:00.579219 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:00.579235 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:03.138643 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:03.151350 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:03.151414 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:03.177456 1848358 cri.go:89] found id: ""
	I1216 02:59:03.177480 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.177489 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:03.177494 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:03.177576 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:03.209025 1848358 cri.go:89] found id: ""
	I1216 02:59:03.209054 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.209063 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:03.209068 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:03.209142 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:03.245557 1848358 cri.go:89] found id: ""
	I1216 02:59:03.245571 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.245578 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:03.245583 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:03.245651 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:03.273887 1848358 cri.go:89] found id: ""
	I1216 02:59:03.273902 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.273909 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:03.273914 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:03.273980 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:03.299955 1848358 cri.go:89] found id: ""
	I1216 02:59:03.299970 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.299977 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:03.299987 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:03.300050 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:03.325891 1848358 cri.go:89] found id: ""
	I1216 02:59:03.325906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.325913 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:03.325918 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:03.325977 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:03.353059 1848358 cri.go:89] found id: ""
	I1216 02:59:03.353073 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.353080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:03.353088 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:03.353101 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:03.409018 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:03.409040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:03.427124 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:03.427141 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:03.498219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:03.489642   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.490076   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.491637   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.492014   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.493527   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:03.498236 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:03.498250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:03.563005 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:03.563031 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:06.091678 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:06.102426 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:06.102489 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:06.127426 1848358 cri.go:89] found id: ""
	I1216 02:59:06.127439 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.127446 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:06.127452 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:06.127511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:06.152255 1848358 cri.go:89] found id: ""
	I1216 02:59:06.152270 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.152277 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:06.152282 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:06.152344 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:06.181806 1848358 cri.go:89] found id: ""
	I1216 02:59:06.181832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.181840 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:06.181846 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:06.181909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:06.211543 1848358 cri.go:89] found id: ""
	I1216 02:59:06.211558 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.211565 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:06.211576 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:06.211638 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:06.239433 1848358 cri.go:89] found id: ""
	I1216 02:59:06.239448 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.239454 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:06.239460 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:06.239521 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:06.265180 1848358 cri.go:89] found id: ""
	I1216 02:59:06.265199 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.265206 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:06.265212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:06.265273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:06.288594 1848358 cri.go:89] found id: ""
	I1216 02:59:06.288608 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.288615 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:06.288622 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:06.288633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:06.347416 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:06.347440 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:06.365120 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:06.365137 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:06.429753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:06.422151   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.422683   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424172   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424494   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.425949   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:06.429762 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:06.429772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:06.491187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:06.491205 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:09.021976 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:09.032138 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:09.032199 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:09.056495 1848358 cri.go:89] found id: ""
	I1216 02:59:09.056509 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.056517 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:09.056522 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:09.056579 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:09.085249 1848358 cri.go:89] found id: ""
	I1216 02:59:09.085263 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.085269 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:09.085275 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:09.085336 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:09.109270 1848358 cri.go:89] found id: ""
	I1216 02:59:09.109284 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.109291 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:09.109296 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:09.109365 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:09.134217 1848358 cri.go:89] found id: ""
	I1216 02:59:09.134231 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.134238 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:09.134243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:09.134305 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:09.158656 1848358 cri.go:89] found id: ""
	I1216 02:59:09.158670 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.158677 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:09.158682 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:09.158749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:09.190922 1848358 cri.go:89] found id: ""
	I1216 02:59:09.190937 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.190944 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:09.190949 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:09.191020 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:09.231605 1848358 cri.go:89] found id: ""
	I1216 02:59:09.231619 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.231633 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:09.231642 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:09.231652 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:09.293613 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:09.293633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:09.310949 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:09.310966 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:09.378806 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:09.369691   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.370608   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372360   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372849   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.374328   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:09.378816 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:09.378827 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:09.440510 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:09.440528 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:11.972007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:11.982340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:11.982402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:12.014868 1848358 cri.go:89] found id: ""
	I1216 02:59:12.014883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.014890 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:12.014895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:12.014969 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:12.040987 1848358 cri.go:89] found id: ""
	I1216 02:59:12.041002 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.041008 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:12.041013 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:12.041090 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:12.065526 1848358 cri.go:89] found id: ""
	I1216 02:59:12.065540 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.065561 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:12.065566 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:12.065635 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:12.093806 1848358 cri.go:89] found id: ""
	I1216 02:59:12.093833 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.093841 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:12.093849 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:12.093921 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:12.121567 1848358 cri.go:89] found id: ""
	I1216 02:59:12.121595 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.121602 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:12.121607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:12.121677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:12.144869 1848358 cri.go:89] found id: ""
	I1216 02:59:12.144883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.144890 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:12.144895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:12.144955 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:12.168723 1848358 cri.go:89] found id: ""
	I1216 02:59:12.168737 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.168744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:12.168752 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:12.168769 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:12.185531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:12.185547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:12.264487 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:12.255585   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.256344   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258045   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258783   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.260488   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:12.264497 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:12.264508 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:12.326049 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:12.326068 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:12.353200 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:12.353216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
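By this point the same sweep has repeated at roughly three-second intervals (02:58:51 through 02:59:12) without the apiserver ever appearing; between sweeps minikube polls for the process with the pgrep line that opens each cycle. The wait is equivalent to a loop such as this sketch (the three-second interval is inferred from the timestamps; the pattern is copied from the log):

    # Poll until an apiserver process for this minikube node shows up
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
    done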
	I1216 02:59:14.910970 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:14.924577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:14.924643 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:14.953399 1848358 cri.go:89] found id: ""
	I1216 02:59:14.953413 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.953420 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:14.953432 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:14.953495 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:14.978792 1848358 cri.go:89] found id: ""
	I1216 02:59:14.978806 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.978815 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:14.978821 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:14.978880 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:15.008511 1848358 cri.go:89] found id: ""
	I1216 02:59:15.008528 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.008536 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:15.008542 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:15.008624 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:15.053197 1848358 cri.go:89] found id: ""
	I1216 02:59:15.053213 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.053220 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:15.053226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:15.053293 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:15.082542 1848358 cri.go:89] found id: ""
	I1216 02:59:15.082557 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.082564 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:15.082570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:15.082634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:15.109527 1848358 cri.go:89] found id: ""
	I1216 02:59:15.109542 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.109550 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:15.109556 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:15.109634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:15.137809 1848358 cri.go:89] found id: ""
	I1216 02:59:15.137823 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.137830 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:15.137838 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:15.137849 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:15.211501 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:15.202592   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.203475   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205236   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205549   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.207125   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:15.202592   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.203475   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205236   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205549   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.207125   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:15.211511 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:15.211523 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:15.285555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:15.285576 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:15.314442 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:15.314458 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:15.370796 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:15.370818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:17.889239 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:17.899171 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:17.899236 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:17.924099 1848358 cri.go:89] found id: ""
	I1216 02:59:17.924113 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.924121 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:17.924126 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:17.924187 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:17.950817 1848358 cri.go:89] found id: ""
	I1216 02:59:17.950832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.950838 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:17.950843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:17.950903 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:17.976899 1848358 cri.go:89] found id: ""
	I1216 02:59:17.976913 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.976920 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:17.976925 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:17.976987 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:18.003139 1848358 cri.go:89] found id: ""
	I1216 02:59:18.003156 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.003164 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:18.003169 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:18.003244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:18.032644 1848358 cri.go:89] found id: ""
	I1216 02:59:18.032659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.032666 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:18.032671 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:18.032740 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:18.058880 1848358 cri.go:89] found id: ""
	I1216 02:59:18.058895 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.058906 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:18.058915 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:18.058988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:18.084275 1848358 cri.go:89] found id: ""
	I1216 02:59:18.084290 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.084298 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:18.084306 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:18.084318 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:18.146637 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:18.146665 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:18.164002 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:18.164022 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:18.241086 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:18.231204   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.232053   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.233635   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.234184   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.235838   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:18.231204   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.232053   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.233635   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.234184   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.235838   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:18.241097 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:18.241110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:18.306777 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:18.306796 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
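Each polling cycle queries the CRI for one control-plane component at a time with `crictl ps -a --quiet --name=<component>`; `--quiet` prints only container IDs, so the empty output behind every `found id: ""` / `0 containers` pair means no matching container exists in any state. A rough equivalent of that loop in Go (component list taken from the log; structure is illustrative, not minikube's actual helpers):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the names polled in the cycles above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy",
		"kube-controller-manager", "kindnet",
	}

	func main() {
		for _, name := range components {
			// --quiet prints bare container IDs, one per line;
			// no output means no container matched the name filter.
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}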
	I1216 02:59:20.840754 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:20.850885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:20.850942 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:20.880985 1848358 cri.go:89] found id: ""
	I1216 02:59:20.881000 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.881007 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:20.881012 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:20.881071 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:20.904789 1848358 cri.go:89] found id: ""
	I1216 02:59:20.904803 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.904810 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:20.904815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:20.904873 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:20.929350 1848358 cri.go:89] found id: ""
	I1216 02:59:20.929362 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.929370 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:20.929381 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:20.929438 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:20.953473 1848358 cri.go:89] found id: ""
	I1216 02:59:20.953487 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.953493 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:20.953499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:20.953558 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:20.977718 1848358 cri.go:89] found id: ""
	I1216 02:59:20.977731 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.977738 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:20.977743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:20.977800 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:21.001640 1848358 cri.go:89] found id: ""
	I1216 02:59:21.001657 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.001664 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:21.001669 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:21.001752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:21.030827 1848358 cri.go:89] found id: ""
	I1216 02:59:21.030840 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.030847 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:21.030855 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:21.030865 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:21.086683 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:21.086703 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:21.106615 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:21.106638 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:21.196393 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:21.180051   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.181461   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.182376   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186322   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186918   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:21.180051   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.181461   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.182376   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186322   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186918   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
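The "describe nodes" gatherer shells out to the version-pinned kubectl binary with the node's own kubeconfig, whose server entry points at https://localhost:8441. While the apiserver is down, API group discovery fails (the repeated memcache.go errors, one per discovery attempt) and kubectl exits 1, so only the stderr above is recorded. A hedged sketch of that invocation from Go (paths copied from the log; error handling illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the log runs; the kubeconfig's server field is
		// https://localhost:8441, so this fails while the apiserver is down.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// kubectl exits 1 after discovery fails; stderr carries the
			// repeated "connection refused" lines seen above.
			fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}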
	I1216 02:59:21.196410 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:21.196420 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:21.259711 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:21.259730 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:23.788985 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:23.801081 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:23.801153 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:23.831711 1848358 cri.go:89] found id: ""
	I1216 02:59:23.831732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.831740 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:23.831745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:23.831812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:23.857025 1848358 cri.go:89] found id: ""
	I1216 02:59:23.857040 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.857047 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:23.857052 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:23.857115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:23.885653 1848358 cri.go:89] found id: ""
	I1216 02:59:23.885667 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.885674 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:23.885679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:23.885739 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:23.912974 1848358 cri.go:89] found id: ""
	I1216 02:59:23.912987 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.912996 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:23.913001 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:23.913062 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:23.936892 1848358 cri.go:89] found id: ""
	I1216 02:59:23.936906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.936914 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:23.936919 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:23.936978 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:23.959826 1848358 cri.go:89] found id: ""
	I1216 02:59:23.959841 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.959848 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:23.959853 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:23.959912 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:23.987747 1848358 cri.go:89] found id: ""
	I1216 02:59:23.987760 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.987767 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:23.987775 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:23.987785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:24.043435 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:24.043453 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:24.060830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:24.060848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:24.129870 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:24.121071   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.121882   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.123511   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.124023   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.125643   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:24.121071   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.121882   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.123511   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.124023   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.125643   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:24.129882 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:24.129893 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:24.192043 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:24.192064 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:26.722933 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:26.733462 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:26.733528 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:26.757094 1848358 cri.go:89] found id: ""
	I1216 02:59:26.757109 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.757115 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:26.757121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:26.757190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:26.785265 1848358 cri.go:89] found id: ""
	I1216 02:59:26.785279 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.785286 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:26.785291 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:26.785348 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:26.809734 1848358 cri.go:89] found id: ""
	I1216 02:59:26.809748 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.809755 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:26.809760 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:26.809823 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:26.833900 1848358 cri.go:89] found id: ""
	I1216 02:59:26.833914 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.833921 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:26.833926 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:26.833983 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:26.858364 1848358 cri.go:89] found id: ""
	I1216 02:59:26.858381 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.858388 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:26.858392 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:26.858476 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:26.884221 1848358 cri.go:89] found id: ""
	I1216 02:59:26.884235 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.884242 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:26.884247 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:26.884306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:26.909747 1848358 cri.go:89] found id: ""
	I1216 02:59:26.909761 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.909768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:26.909776 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:26.909785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:26.965217 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:26.965237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:26.982549 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:26.982573 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:27.049273 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:27.041323   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.041704   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043346   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043928   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.045499   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:27.041323   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.041704   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043346   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043928   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.045499   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:27.049282 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:27.049293 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:27.112656 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:27.112677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:29.642709 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:29.652965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:29.653051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:29.681994 1848358 cri.go:89] found id: ""
	I1216 02:59:29.682008 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.682030 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:29.682037 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:29.682106 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:29.710335 1848358 cri.go:89] found id: ""
	I1216 02:59:29.710350 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.710357 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:29.710363 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:29.710454 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:29.737846 1848358 cri.go:89] found id: ""
	I1216 02:59:29.737861 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.737868 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:29.737873 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:29.737943 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:29.763917 1848358 cri.go:89] found id: ""
	I1216 02:59:29.763931 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.763938 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:29.763944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:29.764015 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:29.788324 1848358 cri.go:89] found id: ""
	I1216 02:59:29.788338 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.788345 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:29.788351 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:29.788409 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:29.812477 1848358 cri.go:89] found id: ""
	I1216 02:59:29.812490 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.812497 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:29.812502 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:29.812561 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:29.840464 1848358 cri.go:89] found id: ""
	I1216 02:59:29.840479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.840486 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:29.840495 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:29.840509 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:29.905495 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:29.897197   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.898006   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899547   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899856   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.901335   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:29.897197   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.898006   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899547   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899856   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.901335   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:29.905505 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:29.905515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:29.967090 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:29.967110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:29.999894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:29.999910 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:30.095570 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:30.095596 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:32.614024 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:32.624941 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:32.625007 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:32.649578 1848358 cri.go:89] found id: ""
	I1216 02:59:32.649593 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.649601 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:32.649606 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:32.649665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:32.678365 1848358 cri.go:89] found id: ""
	I1216 02:59:32.678379 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.678386 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:32.678391 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:32.678450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:32.703205 1848358 cri.go:89] found id: ""
	I1216 02:59:32.703219 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.703226 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:32.703231 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:32.703295 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:32.727484 1848358 cri.go:89] found id: ""
	I1216 02:59:32.727499 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.727506 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:32.727511 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:32.727568 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:32.753092 1848358 cri.go:89] found id: ""
	I1216 02:59:32.753106 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.753113 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:32.753119 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:32.753178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:32.781551 1848358 cri.go:89] found id: ""
	I1216 02:59:32.781565 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.781572 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:32.781577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:32.781636 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:32.807153 1848358 cri.go:89] found id: ""
	I1216 02:59:32.807168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.807176 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:32.807184 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:32.807199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:32.863763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:32.863782 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:32.880478 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:32.880495 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:32.950082 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:32.941362   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.942084   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.943575   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.944217   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.945946   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:32.941362   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.942084   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.943575   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.944217   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.945946   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:32.950092 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:32.950102 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:33.016099 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:33.016121 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:35.546066 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:35.557055 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:35.557115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:35.582927 1848358 cri.go:89] found id: ""
	I1216 02:59:35.582951 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.582960 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:35.582965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:35.583033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:35.608110 1848358 cri.go:89] found id: ""
	I1216 02:59:35.608124 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.608131 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:35.608141 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:35.608203 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:35.632465 1848358 cri.go:89] found id: ""
	I1216 02:59:35.632479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.632485 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:35.632490 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:35.632555 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:35.661165 1848358 cri.go:89] found id: ""
	I1216 02:59:35.661179 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.661198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:35.661204 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:35.661272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:35.686050 1848358 cri.go:89] found id: ""
	I1216 02:59:35.686064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.686081 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:35.686087 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:35.686156 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:35.711189 1848358 cri.go:89] found id: ""
	I1216 02:59:35.711203 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.711210 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:35.711215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:35.711276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:35.735024 1848358 cri.go:89] found id: ""
	I1216 02:59:35.735072 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.735080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:35.735089 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:35.735099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:35.790017 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:35.790036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:35.807195 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:35.807212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:35.870014 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:35.862369   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.862875   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864373   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864766   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.866205   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:35.862369   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.862875   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864373   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864766   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.866205   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:35.870024 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:35.870036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:35.933113 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:35.933134 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
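The pgrep timestamps (02:59:12, :14.9, :17.9, :20.8, ... :38.4) show the health check re-running on a roughly three-second cadence, re-collecting the same logs each round until either a kube-apiserver process appears or the overall wait deadline expires. A poll-until-deadline loop of that shape might look like this (interval and timeout are illustrative, not minikube's actual values):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer re-checks for a running kube-apiserver process on a
	// fixed interval until the deadline, mirroring the ~3s cadence of the
	// "pgrep -xnf kube-apiserver.*minikube.*" lines in the log.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches the pattern.
			if err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for kube-apiserver process")
	}

	func main() {
		if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}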
	I1216 02:59:38.460684 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:38.471131 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:38.471193 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:38.508160 1848358 cri.go:89] found id: ""
	I1216 02:59:38.508175 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.508183 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:38.508188 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:38.508257 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:38.540297 1848358 cri.go:89] found id: ""
	I1216 02:59:38.540312 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.540320 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:38.540324 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:38.540388 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:38.566230 1848358 cri.go:89] found id: ""
	I1216 02:59:38.566244 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.566252 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:38.566257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:38.566321 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:38.591818 1848358 cri.go:89] found id: ""
	I1216 02:59:38.591832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.591839 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:38.591844 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:38.591911 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:38.618603 1848358 cri.go:89] found id: ""
	I1216 02:59:38.618617 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.618624 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:38.618629 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:38.618689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:38.643310 1848358 cri.go:89] found id: ""
	I1216 02:59:38.643324 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.643331 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:38.643337 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:38.643402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:38.667065 1848358 cri.go:89] found id: ""
	I1216 02:59:38.667080 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.667087 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:38.667095 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:38.667106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:38.699522 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:38.699540 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:38.757880 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:38.757898 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:38.774888 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:38.774903 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:38.842015 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:38.834115   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.834681   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836207   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836756   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.838251   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:38.842025 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:38.842036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
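The recurring "dial tcp [::1]:8441: connect: connection refused" means nothing is accepting connections on the apiserver port inside the node. A hypothetical spot check, assuming ss and curl are available there (for example via minikube ssh); these commands are standard iproute2/curl usage, not taken from the test itself:

    # Hypothetical verification that port 8441 has no listener.
    sudo ss -ltnp | grep -w 8441 || echo "nothing is listening on port 8441"
    curl -sk --max-time 5 https://localhost:8441/healthz || echo "apiserver unreachable"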
	I1216 02:59:41.405157 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:41.416379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:41.416447 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:41.446560 1848358 cri.go:89] found id: ""
	I1216 02:59:41.446578 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.446596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:41.446602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:41.446675 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:41.483188 1848358 cri.go:89] found id: ""
	I1216 02:59:41.483202 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.483209 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:41.483213 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:41.483274 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:41.516110 1848358 cri.go:89] found id: ""
	I1216 02:59:41.516140 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.516147 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:41.516152 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:41.516218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:41.540839 1848358 cri.go:89] found id: ""
	I1216 02:59:41.540853 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.540860 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:41.540866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:41.540926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:41.566596 1848358 cri.go:89] found id: ""
	I1216 02:59:41.566622 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.566629 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:41.566634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:41.566706 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:41.590702 1848358 cri.go:89] found id: ""
	I1216 02:59:41.590717 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.590724 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:41.590729 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:41.590791 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:41.616252 1848358 cri.go:89] found id: ""
	I1216 02:59:41.616276 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.616283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:41.616291 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:41.616303 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:41.645509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:41.645525 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:41.704141 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:41.704159 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:41.721706 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:41.721725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:41.783974 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:41.776246   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.776782   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778294   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778717   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.780191   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:41.783984 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:41.784019 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
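The timestamps show the wait loop retrying roughly every three seconds. An equivalent bounded wait can be sketched as below, under the assumption that a two-minute deadline is acceptable (the actual test uses its own timeouts):

    # Sketch of a bounded wait for the apiserver process; the deadline is an assumption.
    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "kube-apiserver did not start before the deadline" >&2
        break
      fi
      sleep 3
    done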
	I1216 02:59:44.346692 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:44.357118 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:44.357181 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:44.382575 1848358 cri.go:89] found id: ""
	I1216 02:59:44.382589 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.382596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:44.382601 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:44.382666 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:44.407349 1848358 cri.go:89] found id: ""
	I1216 02:59:44.407363 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.407370 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:44.407375 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:44.407442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:44.438660 1848358 cri.go:89] found id: ""
	I1216 02:59:44.438674 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.438681 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:44.438693 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:44.438748 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:44.483154 1848358 cri.go:89] found id: ""
	I1216 02:59:44.483168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.483175 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:44.483180 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:44.483239 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:44.512253 1848358 cri.go:89] found id: ""
	I1216 02:59:44.512267 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.512274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:44.512283 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:44.512341 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:44.537396 1848358 cri.go:89] found id: ""
	I1216 02:59:44.537410 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.537427 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:44.537434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:44.537510 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:44.562261 1848358 cri.go:89] found id: ""
	I1216 02:59:44.562275 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.562283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:44.562291 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:44.562300 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:44.630850 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:44.630877 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:44.660268 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:44.660294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:44.721274 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:44.721294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:44.738464 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:44.738482 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:44.804552 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:44.796057   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.796830   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798532   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798943   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.800543   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:47.304816 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:47.315117 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:47.315178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:47.344292 1848358 cri.go:89] found id: ""
	I1216 02:59:47.344306 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.344314 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:47.344319 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:47.344381 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:47.367920 1848358 cri.go:89] found id: ""
	I1216 02:59:47.367934 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.367942 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:47.367947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:47.368017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:47.392383 1848358 cri.go:89] found id: ""
	I1216 02:59:47.392397 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.392404 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:47.392409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:47.392473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:47.415620 1848358 cri.go:89] found id: ""
	I1216 02:59:47.415634 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.415641 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:47.415646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:47.415703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:47.454281 1848358 cri.go:89] found id: ""
	I1216 02:59:47.454295 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.454302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:47.454308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:47.454367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:47.487808 1848358 cri.go:89] found id: ""
	I1216 02:59:47.487822 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.487829 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:47.487834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:47.487893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:47.515510 1848358 cri.go:89] found id: ""
	I1216 02:59:47.515523 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.515531 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:47.515538 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:47.515551 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:47.582935 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:47.574325   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.575137   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.576881   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.577372   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.578856   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:47.582951 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:47.582963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:47.644716 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:47.644735 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:47.673055 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:47.673071 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:47.729448 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:47.729467 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:50.247207 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:50.257829 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:50.257894 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:50.282406 1848358 cri.go:89] found id: ""
	I1216 02:59:50.282422 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.282429 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:50.282435 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:50.282497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:50.307428 1848358 cri.go:89] found id: ""
	I1216 02:59:50.307442 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.307450 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:50.307455 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:50.307514 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:50.332093 1848358 cri.go:89] found id: ""
	I1216 02:59:50.332107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.332114 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:50.332120 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:50.332179 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:50.357137 1848358 cri.go:89] found id: ""
	I1216 02:59:50.357151 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.357158 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:50.357163 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:50.357227 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:50.380923 1848358 cri.go:89] found id: ""
	I1216 02:59:50.380938 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.380945 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:50.380950 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:50.381008 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:50.404673 1848358 cri.go:89] found id: ""
	I1216 02:59:50.404687 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.404695 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:50.404700 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:50.404762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:50.428594 1848358 cri.go:89] found id: ""
	I1216 02:59:50.428609 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.428616 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:50.428624 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:50.428634 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:50.511977 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:50.503194   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.503744   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505371   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505827   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.507476   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:50.511987 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:50.511998 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:50.575372 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:50.575394 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:50.603193 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:50.603215 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:50.660351 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:50.660370 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
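Rather than re-reading each cycle, the same evidence can be captured once; a minimal sketch using the exact gathering commands from the log (the output file names are illustrative):

    # One-shot collection of the log bundle gathered in every cycle above.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.txt   # the original falls back to 'docker ps -a'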
	I1216 02:59:53.177329 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:53.187812 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:53.187876 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:53.212765 1848358 cri.go:89] found id: ""
	I1216 02:59:53.212780 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.212787 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:53.212792 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:53.212855 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:53.237571 1848358 cri.go:89] found id: ""
	I1216 02:59:53.237584 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.237591 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:53.237596 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:53.237657 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:53.261989 1848358 cri.go:89] found id: ""
	I1216 02:59:53.262003 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.262010 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:53.262015 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:53.262077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:53.291843 1848358 cri.go:89] found id: ""
	I1216 02:59:53.291857 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.291864 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:53.291869 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:53.291929 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:53.316569 1848358 cri.go:89] found id: ""
	I1216 02:59:53.316583 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.316590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:53.316595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:53.316655 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:53.340200 1848358 cri.go:89] found id: ""
	I1216 02:59:53.340214 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.340221 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:53.340226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:53.340284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:53.364767 1848358 cri.go:89] found id: ""
	I1216 02:59:53.364782 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.364789 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:53.364796 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:53.364806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:53.423540 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:53.423559 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:53.440975 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:53.440990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:53.518181 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:53.509741   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.510408   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512145   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512724   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.514366   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:53.518190 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:53.518201 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:53.580231 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:53.580250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:56.109099 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:56.119430 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:56.119493 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:56.144050 1848358 cri.go:89] found id: ""
	I1216 02:59:56.144064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.144072 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:56.144077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:56.144137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:56.168768 1848358 cri.go:89] found id: ""
	I1216 02:59:56.168783 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.168790 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:56.168794 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:56.168858 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:56.193611 1848358 cri.go:89] found id: ""
	I1216 02:59:56.193625 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.193633 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:56.193637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:56.193694 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:56.218383 1848358 cri.go:89] found id: ""
	I1216 02:59:56.218396 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.218415 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:56.218420 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:56.218532 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:56.244850 1848358 cri.go:89] found id: ""
	I1216 02:59:56.244864 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.244871 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:56.244888 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:56.244960 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:56.272142 1848358 cri.go:89] found id: ""
	I1216 02:59:56.272167 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.272174 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:56.272181 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:56.272252 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:56.296464 1848358 cri.go:89] found id: ""
	I1216 02:59:56.296478 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.296485 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:56.296493 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:56.296503 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:56.351797 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:56.351818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:56.368635 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:56.368655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:56.433327 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:56.425076   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.425853   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.427469   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.428121   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.429570   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:56.433336 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:56.433346 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:56.509361 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:56.509380 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:59.037187 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:59.047286 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:59.047351 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:59.072817 1848358 cri.go:89] found id: ""
	I1216 02:59:59.072831 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.072838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:59.072843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:59.072914 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:59.098681 1848358 cri.go:89] found id: ""
	I1216 02:59:59.098696 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.098708 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:59.098713 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:59.098774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:59.124932 1848358 cri.go:89] found id: ""
	I1216 02:59:59.124945 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.124953 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:59.124958 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:59.125017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:59.149561 1848358 cri.go:89] found id: ""
	I1216 02:59:59.149575 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.149581 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:59.149586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:59.149646 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:59.174402 1848358 cri.go:89] found id: ""
	I1216 02:59:59.174417 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.174426 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:59.174431 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:59.174497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:59.199717 1848358 cri.go:89] found id: ""
	I1216 02:59:59.199732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.199740 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:59.199745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:59.199812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:59.225754 1848358 cri.go:89] found id: ""
	I1216 02:59:59.225768 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.225787 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:59.225795 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:59.225806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:59.288033 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:59.288058 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:59.316114 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:59.316130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:59.373962 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:59.373981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:59.390958 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:59.390978 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:59.466112 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:59.455145   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.456493   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458087   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458370   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.462047   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:01.968417 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:01.996618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:01.996689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:02.075342 1848358 cri.go:89] found id: ""
	I1216 03:00:02.075366 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.075373 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:02.075379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:02.075457 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:02.107614 1848358 cri.go:89] found id: ""
	I1216 03:00:02.107629 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.107637 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:02.107646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:02.107720 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:02.137752 1848358 cri.go:89] found id: ""
	I1216 03:00:02.137768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.137776 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:02.137782 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:02.137853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:02.169435 1848358 cri.go:89] found id: ""
	I1216 03:00:02.169452 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.169459 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:02.169465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:02.169546 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:02.198391 1848358 cri.go:89] found id: ""
	I1216 03:00:02.198423 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.198431 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:02.198438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:02.198511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:02.227862 1848358 cri.go:89] found id: ""
	I1216 03:00:02.227877 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.227885 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:02.227891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:02.227959 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:02.256236 1848358 cri.go:89] found id: ""
	I1216 03:00:02.256251 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.256269 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:02.256278 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:02.256290 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:02.315559 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:02.315582 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:02.334230 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:02.334248 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:02.404903 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:02.395443   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.396222   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.398711   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.399382   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.400828   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:02.404912 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:02.404923 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:02.469074 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:02.469095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
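The cycle above repeats below at roughly three-second intervals: minikube polls for a kube-apiserver process with pgrep, asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of the same checks run by hand inside the node, mirroring the commands in the log; the profile name is a placeholder, not taken from this run:

	# Open a shell in the minikube node (PROFILE is a placeholder for the profile under test).
	minikube ssh -p PROFILE
	# Ask the CRI endpoint for any kube-apiserver container, running or exited.
	# An empty result matches the `found id: ""` lines above.
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Recent kubelet activity usually explains why the control-plane static pods never started.
	sudo journalctl -u kubelet -n 50 --no-pager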
	I1216 03:00:05.003993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:05.018300 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:05.018420 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:05.047301 1848358 cri.go:89] found id: ""
	I1216 03:00:05.047316 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.047323 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:05.047335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:05.047400 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:05.072682 1848358 cri.go:89] found id: ""
	I1216 03:00:05.072697 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.072704 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:05.072709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:05.072770 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:05.102478 1848358 cri.go:89] found id: ""
	I1216 03:00:05.102493 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.102502 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:05.102507 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:05.102578 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:05.132728 1848358 cri.go:89] found id: ""
	I1216 03:00:05.132743 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.132750 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:05.132756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:05.132825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:05.158706 1848358 cri.go:89] found id: ""
	I1216 03:00:05.158721 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.158728 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:05.158733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:05.158795 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:05.184666 1848358 cri.go:89] found id: ""
	I1216 03:00:05.184681 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.184688 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:05.184694 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:05.184756 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:05.216197 1848358 cri.go:89] found id: ""
	I1216 03:00:05.216213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.216221 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:05.216229 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:05.216239 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:05.278419 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:05.278439 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:05.309753 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:05.309771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:05.366862 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:05.366880 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:05.384427 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:05.384446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:05.452157 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:05.443910   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.444698   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446307   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446727   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.448188   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:07.952402 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:07.967145 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:07.967225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:07.998164 1848358 cri.go:89] found id: ""
	I1216 03:00:07.998178 1848358 logs.go:282] 0 containers: []
	W1216 03:00:07.998185 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:07.998191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:07.998251 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:08.032873 1848358 cri.go:89] found id: ""
	I1216 03:00:08.032889 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.032896 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:08.032901 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:08.032964 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:08.059832 1848358 cri.go:89] found id: ""
	I1216 03:00:08.059846 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.059854 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:08.059859 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:08.059933 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:08.087232 1848358 cri.go:89] found id: ""
	I1216 03:00:08.087246 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.087253 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:08.087258 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:08.087316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:08.114253 1848358 cri.go:89] found id: ""
	I1216 03:00:08.114267 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.114274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:08.114280 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:08.114343 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:08.139972 1848358 cri.go:89] found id: ""
	I1216 03:00:08.139987 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.139994 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:08.139999 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:08.140141 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:08.165613 1848358 cri.go:89] found id: ""
	I1216 03:00:08.165628 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.165637 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:08.165645 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:08.165655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:08.221696 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:08.221715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:08.240189 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:08.240206 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:08.320945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:08.311750   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.312401   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314217   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314799   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.316399   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:08.320954 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:08.320964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:08.384243 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:08.384275 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:10.913864 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:10.926998 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:10.927108 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:10.962440 1848358 cri.go:89] found id: ""
	I1216 03:00:10.962454 1848358 logs.go:282] 0 containers: []
	W1216 03:00:10.962461 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:10.962466 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:10.962526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:11.004569 1848358 cri.go:89] found id: ""
	I1216 03:00:11.004589 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.004598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:11.004610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:11.005096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:11.034401 1848358 cri.go:89] found id: ""
	I1216 03:00:11.034415 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.034429 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:11.034434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:11.034508 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:11.065292 1848358 cri.go:89] found id: ""
	I1216 03:00:11.065309 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.065317 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:11.065325 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:11.065394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:11.092043 1848358 cri.go:89] found id: ""
	I1216 03:00:11.092057 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.092065 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:11.092070 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:11.092163 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:11.121914 1848358 cri.go:89] found id: ""
	I1216 03:00:11.121929 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.121936 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:11.121942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:11.122014 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:11.147863 1848358 cri.go:89] found id: ""
	I1216 03:00:11.147879 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.147886 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:11.147894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:11.147906 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:11.213267 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:11.213287 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:11.231545 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:11.231561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:11.303516 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:11.294839   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.295358   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.296660   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.297169   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.298964   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
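Every kubectl attempt in these describe-nodes entries fails the same way: dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the apiserver port inside the node, consistent with the empty crictl listings. A hedged way to confirm that from inside the node:

	# No output means no process is bound to the apiserver port.
	sudo ss -tlnp | grep 8441
	# Probing the endpoint directly reproduces the kubectl failure; with a healthy
	# apiserver this would reach the /livez health endpoint instead.
	curl -k https://localhost:8441/livez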
	I1216 03:00:11.303525 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:11.303544 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:11.375152 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:11.375181 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:13.905997 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:13.916685 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:13.916754 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:13.946670 1848358 cri.go:89] found id: ""
	I1216 03:00:13.946698 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.946705 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:13.946711 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:13.946782 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:13.978544 1848358 cri.go:89] found id: ""
	I1216 03:00:13.978558 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.978565 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:13.978570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:13.978630 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:14.010045 1848358 cri.go:89] found id: ""
	I1216 03:00:14.010060 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.010068 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:14.010073 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:14.010148 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:14.039695 1848358 cri.go:89] found id: ""
	I1216 03:00:14.039709 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.039717 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:14.039722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:14.039786 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:14.065918 1848358 cri.go:89] found id: ""
	I1216 03:00:14.065932 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.065939 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:14.065944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:14.066002 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:14.092594 1848358 cri.go:89] found id: ""
	I1216 03:00:14.092607 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.092615 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:14.092620 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:14.092684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:14.117022 1848358 cri.go:89] found id: ""
	I1216 03:00:14.117036 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.117043 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:14.117052 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:14.117063 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:14.145392 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:14.145409 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:14.201319 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:14.201338 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:14.218382 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:14.218397 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:14.286945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:14.279281   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.279802   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281416   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281996   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.283003   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:14.286956 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:14.286968 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:16.848830 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:16.859224 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:16.859288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:16.900559 1848358 cri.go:89] found id: ""
	I1216 03:00:16.900573 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.900580 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:16.900586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:16.900660 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:16.925198 1848358 cri.go:89] found id: ""
	I1216 03:00:16.925213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.925221 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:16.925226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:16.925288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:16.968532 1848358 cri.go:89] found id: ""
	I1216 03:00:16.968545 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.968552 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:16.968557 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:16.968620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:17.001327 1848358 cri.go:89] found id: ""
	I1216 03:00:17.001343 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.001351 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:17.001357 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:17.001427 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:17.029828 1848358 cri.go:89] found id: ""
	I1216 03:00:17.029843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.029850 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:17.029855 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:17.029917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:17.055865 1848358 cri.go:89] found id: ""
	I1216 03:00:17.055880 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.055887 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:17.055892 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:17.055956 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:17.081782 1848358 cri.go:89] found id: ""
	I1216 03:00:17.081796 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.081804 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:17.081812 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:17.081823 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:17.137664 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:17.137684 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:17.155387 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:17.155413 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:17.223693 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:17.215814   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.216359   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.217875   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.218284   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.219788   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:17.223704 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:17.223715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:17.285895 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:17.285915 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:19.819792 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:19.830531 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:19.830595 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:19.855374 1848358 cri.go:89] found id: ""
	I1216 03:00:19.855388 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.855395 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:19.855400 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:19.855459 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:19.880613 1848358 cri.go:89] found id: ""
	I1216 03:00:19.880627 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.880634 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:19.880639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:19.880701 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:19.905217 1848358 cri.go:89] found id: ""
	I1216 03:00:19.905231 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.905238 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:19.905243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:19.905306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:19.938230 1848358 cri.go:89] found id: ""
	I1216 03:00:19.938245 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.938252 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:19.938257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:19.938318 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:19.972308 1848358 cri.go:89] found id: ""
	I1216 03:00:19.972322 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.972330 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:19.972335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:19.972396 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:20.009826 1848358 cri.go:89] found id: ""
	I1216 03:00:20.009843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.009851 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:20.009857 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:20.009931 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:20.047016 1848358 cri.go:89] found id: ""
	I1216 03:00:20.047031 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.047075 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:20.047084 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:20.047095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:20.105420 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:20.105444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:20.123806 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:20.123824 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:20.193387 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:20.184716   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.185705   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.187519   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.188104   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.189189   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:20.193399 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:20.193410 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:20.256212 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:20.256232 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:22.788953 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:22.799143 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:22.799205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:22.824912 1848358 cri.go:89] found id: ""
	I1216 03:00:22.824926 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.824933 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:22.824938 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:22.824999 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:22.848993 1848358 cri.go:89] found id: ""
	I1216 03:00:22.849007 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.849014 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:22.849019 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:22.849077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:22.873445 1848358 cri.go:89] found id: ""
	I1216 03:00:22.873467 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.873476 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:22.873481 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:22.873548 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:22.898928 1848358 cri.go:89] found id: ""
	I1216 03:00:22.898952 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.898960 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:22.898965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:22.899088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:22.924441 1848358 cri.go:89] found id: ""
	I1216 03:00:22.924455 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.924462 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:22.924471 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:22.924536 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:22.972165 1848358 cri.go:89] found id: ""
	I1216 03:00:22.972187 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.972194 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:22.972200 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:22.972272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:23.007998 1848358 cri.go:89] found id: ""
	I1216 03:00:23.008014 1848358 logs.go:282] 0 containers: []
	W1216 03:00:23.008021 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:23.008030 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:23.008041 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:23.074846 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:23.065592   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.066370   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.068447   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.069048   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.070772   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:23.074856 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:23.074867 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:23.141968 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:23.141990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:23.170755 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:23.170772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:23.229156 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:23.229176 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:25.746547 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:25.757092 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:25.757177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:25.781744 1848358 cri.go:89] found id: ""
	I1216 03:00:25.781758 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.781765 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:25.781770 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:25.781829 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:25.810185 1848358 cri.go:89] found id: ""
	I1216 03:00:25.810200 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.810207 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:25.810212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:25.810273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:25.837797 1848358 cri.go:89] found id: ""
	I1216 03:00:25.837810 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.837818 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:25.837822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:25.837881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:25.864444 1848358 cri.go:89] found id: ""
	I1216 03:00:25.864466 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.864474 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:25.864479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:25.864537 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:25.889170 1848358 cri.go:89] found id: ""
	I1216 03:00:25.889185 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.889192 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:25.889197 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:25.889253 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:25.913381 1848358 cri.go:89] found id: ""
	I1216 03:00:25.913396 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.913403 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:25.913409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:25.913468 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:25.956168 1848358 cri.go:89] found id: ""
	I1216 03:00:25.956184 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.956191 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:25.956199 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:25.956209 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:25.987017 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:25.987032 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:26.056762 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:26.056783 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:26.074582 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:26.074599 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:26.142533 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:26.133438   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.134117   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.135700   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.136346   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.138045   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:26.133438   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.134117   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.135700   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.136346   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.138045   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:26.142543 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:26.142554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
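This probe-and-gather cycle repeats throughout the section: minikube asks the CRI for each control-plane component by name, finds nothing, and falls back to kubelet, dmesg, containerd, and container-status logs; `kubectl describe nodes` fails because nothing is serving on port 8441. A minimal shell sketch of the per-component probe, assuming `crictl` is reachable inside the node (for example via `minikube ssh`):

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    # Empty output from crictl means no container, running or exited, matches the name.
    ids=$(sudo crictl ps -a --quiet --name="$name")
    if [ -z "$ids" ]; then
      echo "no container found matching \"$name\""
    else
      echo "$name: $ids"
    fi
  done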
	I1216 03:00:28.704757 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:28.715093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:28.715171 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:28.756309 1848358 cri.go:89] found id: ""
	I1216 03:00:28.756339 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.756350 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:28.756355 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:28.756442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:28.786013 1848358 cri.go:89] found id: ""
	I1216 03:00:28.786027 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.786033 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:28.786038 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:28.786099 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:28.813243 1848358 cri.go:89] found id: ""
	I1216 03:00:28.813257 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.813264 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:28.813269 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:28.813329 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:28.837627 1848358 cri.go:89] found id: ""
	I1216 03:00:28.837642 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.837649 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:28.837654 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:28.837714 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:28.862744 1848358 cri.go:89] found id: ""
	I1216 03:00:28.862768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.862775 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:28.862780 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:28.862850 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:28.888763 1848358 cri.go:89] found id: ""
	I1216 03:00:28.888777 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.888784 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:28.888790 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:28.888851 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:28.913212 1848358 cri.go:89] found id: ""
	I1216 03:00:28.913226 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.913234 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:28.913242 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:28.913252 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:28.973937 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:28.973957 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:28.995906 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:28.995924 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:29.068971 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:29.060478   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.060883   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062455   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062780   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.064407   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:29.060478   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.060883   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062455   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062780   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.064407   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:29.068980 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:29.068994 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:29.132688 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:29.132707 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:31.666915 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:31.677125 1848358 kubeadm.go:602] duration metric: took 4m1.758576282s to restartPrimaryControlPlane
	W1216 03:00:31.677186 1848358 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:00:31.677266 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:00:32.091488 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:00:32.105369 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:00:32.113490 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:00:32.113550 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:00:32.122054 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:00:32.122064 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:00:32.122120 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:00:32.130622 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:00:32.130682 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:00:32.138437 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:00:32.146797 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:00:32.146863 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:00:32.155178 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.163734 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:00:32.163795 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.171993 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:00:32.180028 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:00:32.180097 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
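The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and here all four files are simply absent after the reset. A sketch of the same loop, with the endpoint taken from the log:

  endpoint="https://control-plane.minikube.internal:8441"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # grep exits non-zero when the file is missing or lacks the endpoint;
    # either way the (possibly absent) file is removed before kubeadm init runs.
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done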
	I1216 03:00:32.188091 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:00:32.228785 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:00:32.228977 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:00:32.306472 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:00:32.306542 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:00:32.306577 1848358 kubeadm.go:319] OS: Linux
	I1216 03:00:32.306630 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:00:32.306684 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:00:32.306730 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:00:32.306783 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:00:32.306837 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:00:32.306884 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:00:32.306934 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:00:32.306987 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:00:32.307033 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:00:32.370232 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:00:32.370342 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:00:32.370445 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:00:32.376940 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:00:32.380870 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:00:32.380973 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:00:32.381073 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:00:32.381166 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:00:32.381227 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:00:32.381296 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:00:32.381349 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:00:32.381411 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:00:32.381496 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:00:32.381600 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:00:32.381683 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:00:32.381723 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:00:32.381783 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:00:32.587867 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:00:32.728887 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:00:33.127071 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:00:33.632583 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:00:33.851925 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:00:33.852650 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:00:33.855273 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:00:33.858613 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:00:33.858712 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:00:33.858788 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:00:33.858854 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:00:33.878797 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:00:33.879802 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:00:33.887340 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:00:33.887615 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:00:33.887656 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:00:34.023686 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:00:34.027990 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:04:34.028846 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005338087s
	I1216 03:04:34.028875 1848358 kubeadm.go:319] 
	I1216 03:04:34.028931 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:04:34.028963 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:04:34.029067 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:04:34.029071 1848358 kubeadm.go:319] 
	I1216 03:04:34.029175 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:04:34.029206 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:04:34.029236 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:04:34.029239 1848358 kubeadm.go:319] 
	I1216 03:04:34.033654 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:04:34.034083 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:04:34.034191 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:34.034426 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:04:34.034431 1848358 kubeadm.go:319] 
	I1216 03:04:34.034499 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 03:04:34.034613 1848358 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005338087s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
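kubeadm's own suggestion is the natural next step before the retry below: inspect the kubelet unit and poll the same health endpoint kubeadm waited on for 4m0s, from inside the node (for example via `minikube ssh`):

  systemctl status kubelet                    # is the unit active, and why did it last exit?
  journalctl -xeu kubelet                     # the kubelet's own error messages
  curl -sSL http://127.0.0.1:10248/healthz    # the probe kubeadm polls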
	
	I1216 03:04:34.034714 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:04:34.442103 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:34.455899 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:34.455954 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:34.464166 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:34.464176 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:34.464227 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:04:34.472141 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:34.472197 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:34.479703 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:04:34.487496 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:34.487553 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:34.495305 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.504218 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:34.504277 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.512085 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:04:34.520037 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:34.520091 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:34.527590 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:34.569546 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:04:34.569597 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:34.648580 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:34.648645 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:04:34.648680 1848358 kubeadm.go:319] OS: Linux
	I1216 03:04:34.648724 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:34.648775 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:04:34.648847 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:34.648894 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:34.648941 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:34.648988 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:34.649031 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:34.649078 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:34.649123 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:04:34.718553 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:34.718667 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:34.718765 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:04:34.725198 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:04:34.730521 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:34.730604 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:34.730670 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:34.730745 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:04:34.730804 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:04:34.730873 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:04:34.730926 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:04:34.730988 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:04:34.731077 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:04:34.731151 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:04:34.731222 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:04:34.731258 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:04:34.731313 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:34.775823 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:35.226979 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:04:35.500835 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:35.803186 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:35.922858 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:35.923646 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:35.926392 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:35.929487 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:04:35.929587 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:35.929670 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:35.930420 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:35.952397 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:35.952501 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:04:35.960726 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:04:35.961037 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:35.961210 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:36.110987 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:04:36.111155 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:08:36.111000 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000244075s
	I1216 03:08:36.111025 1848358 kubeadm.go:319] 
	I1216 03:08:36.111095 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:08:36.111126 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:08:36.111231 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:08:36.111235 1848358 kubeadm.go:319] 
	I1216 03:08:36.111337 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:08:36.111368 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:08:36.111397 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:08:36.111401 1848358 kubeadm.go:319] 
	I1216 03:08:36.115184 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:08:36.115598 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:08:36.115704 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:08:36.115939 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:08:36.115944 1848358 kubeadm.go:319] 
	I1216 03:08:36.116012 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 03:08:36.116067 1848358 kubeadm.go:403] duration metric: took 12m6.232765178s to StartCluster
	I1216 03:08:36.116112 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:08:36.116177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:08:36.140414 1848358 cri.go:89] found id: ""
	I1216 03:08:36.140430 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.140437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:08:36.140442 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:08:36.140504 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:08:36.164577 1848358 cri.go:89] found id: ""
	I1216 03:08:36.164590 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.164598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:08:36.164604 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:08:36.164663 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:08:36.188307 1848358 cri.go:89] found id: ""
	I1216 03:08:36.188321 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.188328 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:08:36.188333 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:08:36.188394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:08:36.213037 1848358 cri.go:89] found id: ""
	I1216 03:08:36.213050 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.213057 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:08:36.213062 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:08:36.213121 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:08:36.239675 1848358 cri.go:89] found id: ""
	I1216 03:08:36.239690 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.239698 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:08:36.239704 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:08:36.239762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:08:36.262932 1848358 cri.go:89] found id: ""
	I1216 03:08:36.262947 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.262955 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:08:36.262960 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:08:36.263018 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:08:36.288318 1848358 cri.go:89] found id: ""
	I1216 03:08:36.288332 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.288340 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:08:36.288349 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:08:36.288358 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:08:36.350247 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:08:36.350267 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:08:36.380644 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:08:36.380660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:08:36.436449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:08:36.436466 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:08:36.457199 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:08:36.457222 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:08:36.526010 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1216 03:08:36.526029 1848358 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 03:08:36.526065 1848358 out.go:285] * 
	W1216 03:08:36.526124 1848358 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.526137 1848358 out.go:285] * 
	W1216 03:08:36.528271 1848358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:08:36.533177 1848358 out.go:203] 
	W1216 03:08:36.537050 1848358 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.537112 1848358 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 03:08:36.537136 1848358 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 03:08:36.540537 1848358 out.go:203] 
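	
	Note on the failure above: both copies of the kubeadm output name the real blocker. Kubelet v1.35 refuses to start on a cgroup v1 host unless the KubeletConfiguration option 'FailCgroupV1' is set to 'false'; the suggested cgroup-driver flag does not touch that validation. A minimal, untested sketch of the documented escape hatch, assuming the generated kubeadm config at /var/tmp/minikube/kubeadm.yaml can be edited before init runs (minikube already passes --ignore-preflight-errors=SystemVerification, as the command line above shows):
	
	    # Append a KubeletConfiguration document that disables the cgroup v1 hard failure.
	    # failCgroupV1 is the exact option named in the SystemVerification warning.
	    printf '%s\n' '---' \
	      'apiVersion: kubelet.config.k8s.io/v1beta1' \
	      'kind: KubeletConfiguration' \
	      'failCgroupV1: false' | sudo tee -a /var/tmp/minikube/kubeadm.yaml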
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418983774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418998239Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419036154Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419097175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419108202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419119509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419128805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419140062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419155980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419187823Z" level=info msg="Connect containerd service"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419497668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.420076931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439480285Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439840672Z" level=info msg="Start recovering state"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439686821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.443248018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513022632Z" level=info msg="Start event monitor"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513204659Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513279259Z" level=info msg="Start streaming server"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513342856Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513405935Z" level=info msg="runtime interface starting up..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513471920Z" level=info msg="starting plugins..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513539119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:56:28 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.516797790Z" level=info msg="containerd successfully booted in 0.120064s"
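	
	Aside: the "no network config found in /etc/cni/net.d" error at containerd startup is expected before a CNI is installed (the start log below shows minikube recommending kindnet for this driver/runtime combination), but after a failed start it is worth confirming the directory really is still empty. A quick check, assuming the profile name used throughout this report:
	
	    minikube -p functional-389759 ssh -- ls -la /etc/cni/net.d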
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:08:37.724944   20988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:37.725705   20988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:37.727329   20988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:37.727866   20988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:37.729439   20988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
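	
	These connection-refused errors are consistent with the empty "container status" table above: no apiserver container was ever created, so nothing listens on 8441. One hedged way to confirm that from the host, reusing the crictl pattern this suite already exercises (crictl's --name flag is a substring filter):
	
	    minikube -p functional-389759 ssh -- sudo crictl ps -a --name kube-apiserver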
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:08:37 up  8:51,  0 user,  load average: 0.73, 0.37, 0.56
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:08:34 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:34 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 16 03:08:34 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:34 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:34 functional-389759 kubelet[20791]: E1216 03:08:34.981330   20791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:34 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:34 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:35 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 16 03:08:35 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:35 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:35 functional-389759 kubelet[20797]: E1216 03:08:35.731919   20797 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:35 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:35 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:36 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 03:08:36 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:36 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:36 functional-389759 kubelet[20875]: E1216 03:08:36.497783   20875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:36 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:36 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:37 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 03:08:37 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:37 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:37 functional-389759 kubelet[20905]: E1216 03:08:37.260550   20905 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:37 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:37 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
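
The kubelet journal at the end of the dump above is the proximate cause: systemd is restart-looping kubelet (counters 318 through 321) on "kubelet is configured to not run on a host using cgroup v1". A quick check of which cgroup hierarchy the kic node actually mounts, assuming the profile name from this report (cgroup2fs indicates v2, tmpfs indicates v1):

    minikube -p functional-389759 ssh -- stat -fc %T /sys/fs/cgroup/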
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (342.868078ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (732.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-389759 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-389759 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (72.557254ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-389759 get po -l tier=control-plane -n kube-system -o=json": exit status 1
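
The empty List above is kubectl printing a zero-item result while API discovery fails; on a healthy cluster the same selector returns the control-plane static pods. A hedged spot-check of their names and phases with standard jsonpath (context name taken from this report):

    kubectl --context functional-389759 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'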
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
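
The inspect output shows each container port published on an ephemeral 127.0.0.1 host port (8441/tcp on 34357 here). The same Go template minikube itself uses for the SSH port in the start log below also extracts it; a sketch for the apiserver port:

    docker container inspect functional-389759 \
      -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'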
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (314.540659ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-389759 logs -n 25: (1.015792218s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr                                                  │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls --format table --alsologtostderr                                                                                             │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ update-context │ functional-853651 update-context --alsologtostderr -v=2                                                                                                 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ image          │ functional-853651 image ls                                                                                                                              │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ delete         │ -p functional-853651                                                                                                                                    │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
	│ start          │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │                     │
	│ start          │ -p functional-389759 --alsologtostderr -v=8                                                                                                             │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:50 UTC │                     │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add registry.k8s.io/pause:latest                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache add minikube-local-cache-test:functional-389759                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ functional-389759 cache delete minikube-local-cache-test:functional-389759                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl images                                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ cache          │ functional-389759 cache reload                                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh            │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ kubectl        │ functional-389759 kubectl -- --context functional-389759 get pods                                                                                       │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ start          │ -p functional-389759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
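	
	Neither start in the audit table records an END TIME: both attempts timed out rather than failing fast. To reproduce and collect the artifacts the boxed advice above asks for, the report's own commands amount to:
	
	    out/minikube-linux-arm64 start -p functional-389759 --alsologtostderr -v=8
	    out/minikube-linux-arm64 -p functional-389759 logs --file=logs.txt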
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:56:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:56:25.844373 1848358 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:56:25.844466 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844470 1848358 out.go:374] Setting ErrFile to fd 2...
	I1216 02:56:25.844474 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844836 1848358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:56:25.845570 1848358 out.go:368] Setting JSON to false
	I1216 02:56:25.846389 1848358 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":31130,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:56:25.846449 1848358 start.go:143] virtualization:  
	I1216 02:56:25.849867 1848358 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:56:25.854549 1848358 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:56:25.854652 1848358 notify.go:221] Checking for updates...
	I1216 02:56:25.860318 1848358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:56:25.863452 1848358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:56:25.866454 1848358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:56:25.869328 1848358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:56:25.872192 1848358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:56:25.875771 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:25.875865 1848358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:56:25.910877 1848358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:56:25.910989 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:25.979751 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:25.969640801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:56:25.979847 1848358 docker.go:319] overlay module found
	I1216 02:56:25.984585 1848358 out.go:179] * Using the docker driver based on existing profile
	I1216 02:56:25.987331 1848358 start.go:309] selected driver: docker
	I1216 02:56:25.987339 1848358 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:25.987425 1848358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:56:25.987525 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:26.045497 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:26.035789712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:56:26.045925 1848358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:56:26.045948 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:26.045996 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:26.046044 1848358 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:26.049158 1848358 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:56:26.052095 1848358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:56:26.055176 1848358 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:56:26.058088 1848358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:56:26.058108 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:26.058178 1848358 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:56:26.058195 1848358 cache.go:65] Caching tarball of preloaded images
	I1216 02:56:26.058305 1848358 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:56:26.058312 1848358 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:56:26.058447 1848358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:56:26.078911 1848358 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:56:26.078923 1848358 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:56:26.078944 1848358 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:56:26.078984 1848358 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:56:26.079085 1848358 start.go:364] duration metric: took 83.453µs to acquireMachinesLock for "functional-389759"
	I1216 02:56:26.079107 1848358 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:56:26.079112 1848358 fix.go:54] fixHost starting: 
	I1216 02:56:26.079431 1848358 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:56:26.097178 1848358 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:56:26.097205 1848358 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:56:26.100419 1848358 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:56:26.100450 1848358 machine.go:94] provisionDockerMachine start ...
	I1216 02:56:26.100545 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.118508 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.118832 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.118839 1848358 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:56:26.259148 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.259164 1848358 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:56:26.259234 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.277500 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.277820 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.277829 1848358 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:56:26.421165 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.421257 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.440349 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.440644 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.440657 1848358 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:56:26.579508 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: 
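The SSH snippet above is minikube's idempotent /etc/hosts update: if no line already resolves the machine name, it rewrites an existing 127.0.1.1 entry or appends one, so re-running it on an already-provisioned machine is a no-op (hence the empty output here). Below is a minimal Go sketch of the same logic; it is a hypothetical standalone program for illustration, not minikube's own code, which renders the shell snippet shown above and runs it over SSH.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry makes hostname resolvable via /etc/hosts, mirroring the
// grep/sed/tee snippet in the log above.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already present? Mirrors: grep -xq '.*\sfunctional-389759' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil
	}
	// Rewrite an existing 127.0.1.1 line if there is one, else append.
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.Match(data) {
		data = loop.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", hostname))...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-389759"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}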
	I1216 02:56:26.579533 1848358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:56:26.579555 1848358 ubuntu.go:190] setting up certificates
	I1216 02:56:26.579573 1848358 provision.go:84] configureAuth start
	I1216 02:56:26.579642 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:26.598860 1848358 provision.go:143] copyHostCerts
	I1216 02:56:26.598936 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:56:26.598944 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:56:26.599024 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:56:26.599152 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:56:26.599157 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:56:26.599183 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:56:26.599298 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:56:26.599302 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:56:26.599329 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:56:26.599373 1848358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:56:26.772331 1848358 provision.go:177] copyRemoteCerts
	I1216 02:56:26.772384 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:56:26.772421 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.790833 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:26.886672 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:56:26.903453 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:56:26.920711 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 02:56:26.938516 1848358 provision.go:87] duration metric: took 358.921052ms to configureAuth
	I1216 02:56:26.938533 1848358 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:56:26.938730 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:26.938735 1848358 machine.go:97] duration metric: took 838.281264ms to provisionDockerMachine
	I1216 02:56:26.938741 1848358 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:56:26.938751 1848358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:56:26.938797 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:56:26.938840 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.957601 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.062997 1848358 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:56:27.066589 1848358 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:56:27.066608 1848358 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:56:27.066618 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:56:27.066672 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:56:27.066743 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:56:27.066818 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:56:27.066859 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:56:27.074143 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:27.091762 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:56:27.109760 1848358 start.go:296] duration metric: took 171.004929ms for postStartSetup
	I1216 02:56:27.109845 1848358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:56:27.109892 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.130041 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.224282 1848358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:56:27.229295 1848358 fix.go:56] duration metric: took 1.150175721s for fixHost
	I1216 02:56:27.229312 1848358 start.go:83] releasing machines lock for "functional-389759", held for 1.150220136s
	I1216 02:56:27.229388 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:27.246922 1848358 ssh_runner.go:195] Run: cat /version.json
	I1216 02:56:27.246974 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.247232 1848358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:56:27.247302 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.269086 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.280897 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.370924 1848358 ssh_runner.go:195] Run: systemctl --version
	I1216 02:56:27.469438 1848358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:56:27.474082 1848358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:56:27.474143 1848358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:56:27.482716 1848358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
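Before selecting a CNI, minikube disables any pre-existing bridge/podman CNI configs by renaming them to *.mk_disabled, which is what the find/-exec mv command above does; here nothing matched, so there was nothing to disable. A rough Go equivalent of that rename pass (hypothetical, for illustration only):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and configs that are already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Matches the find predicate: ( -name *bridge* -or -name *podman* )
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Printf("disabled %s\n", src)
		}
	}
}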
	I1216 02:56:27.482730 1848358 start.go:496] detecting cgroup driver to use...
	I1216 02:56:27.482760 1848358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:56:27.482821 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:56:27.499295 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:56:27.512730 1848358 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:56:27.512788 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:56:27.529084 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:56:27.542618 1848358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:56:27.669326 1848358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:56:27.809661 1848358 docker.go:234] disabling docker service ...
	I1216 02:56:27.809726 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:56:27.825238 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:56:27.839007 1848358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:56:27.961490 1848358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:56:28.085730 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:56:28.099793 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:56:28.115219 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:56:28.124904 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:56:28.134481 1848358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:56:28.134543 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:56:28.143714 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.152978 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:56:28.161801 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.170944 1848358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:56:28.179475 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:56:28.188723 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:56:28.197979 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:56:28.206949 1848358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:56:28.214520 1848358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:56:28.222338 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.339529 1848358 ssh_runner.go:195] Run: sudo systemctl restart containerd
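The sed sequence above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizes the runc runtime to io.containerd.runc.v2, resets conf_dir to /etc/cni/net.d, and re-enables unprivileged ports, before reloading systemd and restarting containerd. As one concrete example, here is a Go sketch of the SystemdCgroup rewrite; this is a hypothetical local equivalent of the sed command in the log, which minikube actually runs over SSH.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Preserving the original indentation via the capture group matters because the key sits inside a nested TOML table.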
	I1216 02:56:28.517809 1848358 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:56:28.517866 1848358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:56:28.522881 1848358 start.go:564] Will wait 60s for crictl version
	I1216 02:56:28.522937 1848358 ssh_runner.go:195] Run: which crictl
	I1216 02:56:28.526562 1848358 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:56:28.550167 1848358 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:56:28.550234 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.570328 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.596807 1848358 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:56:28.599682 1848358 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:56:28.616323 1848358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:56:28.623466 1848358 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 02:56:28.626293 1848358 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:56:28.626428 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:28.626509 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.651243 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.651255 1848358 containerd.go:534] Images already preloaded, skipping extraction
	I1216 02:56:28.651317 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.676192 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.676203 1848358 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:56:28.676209 1848358 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:56:28.676312 1848358 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:56:28.676373 1848358 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:56:28.700239 1848358 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 02:56:28.700256 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:28.700264 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:28.700272 1848358 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:56:28.700294 1848358 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:56:28.700400 1848358 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 02:56:28.700473 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:56:28.708593 1848358 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:56:28.708655 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:56:28.716199 1848358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:56:28.728994 1848358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:56:28.742129 1848358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1216 02:56:28.754916 1848358 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:56:28.758765 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.878289 1848358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:56:29.187922 1848358 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:56:29.187939 1848358 certs.go:195] generating shared ca certs ...
	I1216 02:56:29.187954 1848358 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:56:29.188132 1848358 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:56:29.188175 1848358 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:56:29.188182 1848358 certs.go:257] generating profile certs ...
	I1216 02:56:29.188282 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:56:29.188344 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:56:29.188398 1848358 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:56:29.188534 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:56:29.188573 1848358 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:56:29.188580 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:56:29.188615 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:56:29.188648 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:56:29.188671 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:56:29.188729 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:29.189416 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:56:29.212546 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:56:29.235562 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:56:29.257334 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:56:29.278410 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:56:29.297639 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:56:29.316055 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:56:29.333992 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:56:29.351802 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:56:29.370197 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:56:29.388624 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:56:29.406325 1848358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:56:29.419477 1848358 ssh_runner.go:195] Run: openssl version
	I1216 02:56:29.425780 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.433488 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:56:29.440931 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444594 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444652 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.485312 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:56:29.492681 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.499838 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:56:29.507532 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511555 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511621 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.552382 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:56:29.559682 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.566808 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:56:29.574430 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578016 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578077 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.619735 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
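Each of the three certificates above goes through the same trust-store dance: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients can locate it (b5213941.0 for minikubeCA.pem here). Below is a Go sketch of one iteration; this is a hypothetical helper that shells out to openssl just as the logged commands do.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert links /etc/ssl/certs/<subject-hash>.0 at the given PEM file.
func trustCert(pem string) error {
	// Same invocation as in the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error; mirrors the force flag in `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}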
	I1216 02:56:29.627282 1848358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:56:29.630975 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:56:29.674022 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:56:29.716546 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:56:29.760378 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:56:29.801675 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:56:29.842471 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 02:56:29.883311 1848358 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:29.883412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:56:29.883472 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.910518 1848358 cri.go:89] found id: ""
	I1216 02:56:29.910580 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:56:29.918530 1848358 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:56:29.918539 1848358 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:56:29.918590 1848358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:56:29.926051 1848358 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:29.926594 1848358 kubeconfig.go:125] found "functional-389759" server: "https://192.168.49.2:8441"
	I1216 02:56:29.927850 1848358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:56:29.937055 1848358 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 02:41:54.425829655 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 02:56:28.747941655 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1216 02:56:29.937066 1848358 kubeadm.go:1161] stopping kube-system containers ...
	I1216 02:56:29.937078 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1216 02:56:29.937140 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.975717 1848358 cri.go:89] found id: ""
	I1216 02:56:29.975778 1848358 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 02:56:29.994835 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:56:30.004346 1848358 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 02:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 02:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 02:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 02:46 /etc/kubernetes/scheduler.conf
	
	I1216 02:56:30.004430 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 02:56:30.041702 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 02:56:30.052507 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.052569 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:56:30.061943 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.073420 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.073488 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.083069 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 02:56:30.092935 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.092994 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:56:30.101587 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:56:30.114178 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:30.166214 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.346212 1848358 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179973709s)
	I1216 02:56:31.346269 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.548322 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.601050 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
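Because existing configuration files were found, minikube restarts the control plane phase by phase rather than running a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the regenerated /var/tmp/minikube/kubeadm.yaml. Below is a hypothetical driver loop for the same sequence; the logged commands additionally prefix each invocation with env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" so the versioned kubeadm binary is used.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The exact phases visible in the log, in order.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"kubeadm"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}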
	I1216 02:56:31.649581 1848358 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:56:31.649669 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:32.150228 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:32.649839 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:33.149820 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:33.650613 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:34.150733 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:34.649773 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:35.150705 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:35.649751 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:36.150703 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:36.650627 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:37.150392 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:37.649857 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:38.150375 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:38.650600 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:39.150146 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:39.649848 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:40.150319 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:40.650732 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:41.150402 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:41.649922 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:42.150742 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:42.649781 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:43.150590 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:43.650502 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:44.149866 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:44.649912 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:45.150004 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:45.650501 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:46.149734 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:46.649745 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:47.150639 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:47.649826 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:48.150565 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:48.649896 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:49.149744 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:49.650628 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:50.149885 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:50.649789 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:51.150643 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:51.649902 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:52.149806 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:52.650451 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:53.150140 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:53.649767 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:54.150751 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:54.650468 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:55.149878 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:55.650629 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:56.150781 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:56.650556 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:57.149864 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:57.650766 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:58.150741 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:58.649892 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:59.150551 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:59.650283 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:00.150247 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:00.650607 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:01.150638 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:01.650253 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:02.149858 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:02.650117 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:03.149960 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:03.649720 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:04.150726 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:04.650425 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:05.149866 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:05.649851 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:06.150611 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:06.650200 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:07.150444 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:07.650600 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:08.149853 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:08.650701 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:09.150579 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:09.649862 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:10.149858 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:10.650393 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:11.150022 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:11.649819 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:12.150562 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:12.649775 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:13.150489 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:13.650396 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:14.149848 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:14.649998 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:15.149945 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:15.649800 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:16.149886 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:16.650049 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:17.149847 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:17.649836 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:18.149898 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:18.649853 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:19.149883 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:19.649825 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:20.149732 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:20.650204 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:21.149852 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:21.649824 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:22.150472 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:22.650452 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:23.150780 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:23.650556 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:24.149887 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:24.650458 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:25.150518 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:25.650351 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:26.149849 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:26.650701 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:27.150612 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:27.650232 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:28.150399 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:28.650537 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:29.150626 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:29.650514 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:30.150439 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:30.650333 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:31.149886 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
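The run of identical probes above is the failure signature of this test: from 02:56:31 to 02:57:31 minikube checks for a kube-apiserver process every 500ms and never finds one, then falls back to collecting diagnostics. Here is a sketch of that wait loop; this is a hypothetical standalone version, while minikube's own loop lives in api_server.go, as the "waiting for apiserver process to appear" line shows.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows
// up or the deadline passes, mirroring the 500ms cadence in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Once the deadline passes, the crictl queries below all return empty ID lists, confirming that no control-plane containers ever started.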
	I1216 02:57:31.650315 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:31.650394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:31.674930 1848358 cri.go:89] found id: ""
	I1216 02:57:31.674944 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.674951 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:31.674956 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:31.675016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:31.714000 1848358 cri.go:89] found id: ""
	I1216 02:57:31.714013 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.714021 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:31.714026 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:31.714086 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:31.747840 1848358 cri.go:89] found id: ""
	I1216 02:57:31.747854 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.747861 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:31.747866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:31.747926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:31.773860 1848358 cri.go:89] found id: ""
	I1216 02:57:31.773874 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.773886 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:31.773891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:31.773953 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:31.802242 1848358 cri.go:89] found id: ""
	I1216 02:57:31.802256 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.802263 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:31.802268 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:31.802327 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:31.827140 1848358 cri.go:89] found id: ""
	I1216 02:57:31.827170 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.827177 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:31.827183 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:31.827250 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:31.851813 1848358 cri.go:89] found id: ""
	I1216 02:57:31.851827 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.851834 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:31.851841 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:31.851852 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:31.907296 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:31.907315 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:31.924742 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:31.924759 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:31.990670 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:31.980837   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.981269   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.984774   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.985315   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.986770   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:31.990681 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:31.990692 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:32.056720 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:32.056741 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
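The cycle above is minikube's control-plane probe: it looks for a running kube-apiserver process, asks the CRI for a container matching each expected component, and, when every listing comes back empty, falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal shell sketch of the same per-component check, assuming crictl is installed on the node and containerd serves the k8s.io namespace (the component names are exactly the ones minikube greps for):

    # probe each expected control-plane container, as the log lines above do
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching $c"
    done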
	I1216 02:57:34.586741 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:34.596594 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:34.596656 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:34.624415 1848358 cri.go:89] found id: ""
	I1216 02:57:34.624430 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.624437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:34.624454 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:34.624529 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:34.648856 1848358 cri.go:89] found id: ""
	I1216 02:57:34.648877 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.648884 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:34.648889 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:34.648952 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:34.674838 1848358 cri.go:89] found id: ""
	I1216 02:57:34.674852 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.674859 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:34.674864 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:34.674938 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:34.720068 1848358 cri.go:89] found id: ""
	I1216 02:57:34.720082 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.720089 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:34.720093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:34.720152 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:34.749510 1848358 cri.go:89] found id: ""
	I1216 02:57:34.749525 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.749531 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:34.749541 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:34.749603 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:34.776711 1848358 cri.go:89] found id: ""
	I1216 02:57:34.776725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.776732 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:34.776737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:34.776797 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:34.801539 1848358 cri.go:89] found id: ""
	I1216 02:57:34.801552 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.801560 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:34.801568 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:34.801578 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:34.857992 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:34.858012 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:34.876290 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:34.876307 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:34.948190 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:34.939256   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.940046   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.941775   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.942456   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.944096   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:34.948202 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:34.948213 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:35.015139 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:35.015162 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
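Every describe-nodes attempt fails the same way: the node-local kubectl dials localhost:8441 and gets connection refused, which is consistent with the empty kube-apiserver listings above, since nothing is bound to that port. A quick connectivity check from the host, as a hedged sketch (assuming ss is present in the node image; add -p <profile> if this is not the default profile):

    minikube ssh -- 'sudo ss -ltn | grep 8441 || echo "nothing listening on 8441"'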
	I1216 02:57:37.549752 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:37.560125 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:37.560194 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:37.585130 1848358 cri.go:89] found id: ""
	I1216 02:57:37.585144 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.585151 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:37.585156 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:37.585216 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:37.610009 1848358 cri.go:89] found id: ""
	I1216 02:57:37.610023 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.610030 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:37.610035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:37.610096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:37.635414 1848358 cri.go:89] found id: ""
	I1216 02:57:37.635429 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.635436 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:37.635441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:37.635503 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:37.660026 1848358 cri.go:89] found id: ""
	I1216 02:57:37.660046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.660053 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:37.660059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:37.660119 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:37.702568 1848358 cri.go:89] found id: ""
	I1216 02:57:37.702583 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.702590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:37.702595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:37.702659 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:37.735671 1848358 cri.go:89] found id: ""
	I1216 02:57:37.735685 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.735693 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:37.735698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:37.735766 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:37.764451 1848358 cri.go:89] found id: ""
	I1216 02:57:37.764465 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.764472 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:37.764481 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:37.764492 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:37.781790 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:37.781808 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:37.850130 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:37.841387   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.841981   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.843649   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845020   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845734   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:37.850150 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:37.850161 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:37.912286 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:37.912306 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:37.947545 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:37.947561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
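Each cycle opens with the pgrep probe shown above: -f matches against the full command line, -x requires the match to cover the whole line, and -n prints only the newest matching PID. An empty result therefore means no kube-apiserver process exists at all, and the gathering loop runs again:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver process not running'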
	I1216 02:57:40.504032 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:40.514627 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:40.514689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:40.543498 1848358 cri.go:89] found id: ""
	I1216 02:57:40.543513 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.543520 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:40.543524 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:40.543593 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:40.568106 1848358 cri.go:89] found id: ""
	I1216 02:57:40.568120 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.568127 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:40.568132 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:40.568190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:40.592290 1848358 cri.go:89] found id: ""
	I1216 02:57:40.592304 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.592317 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:40.592322 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:40.592382 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:40.617796 1848358 cri.go:89] found id: ""
	I1216 02:57:40.617811 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.617818 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:40.617823 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:40.617882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:40.643710 1848358 cri.go:89] found id: ""
	I1216 02:57:40.643725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.643732 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:40.643737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:40.643811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:40.672711 1848358 cri.go:89] found id: ""
	I1216 02:57:40.672731 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.672738 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:40.672743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:40.672802 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:40.704590 1848358 cri.go:89] found id: ""
	I1216 02:57:40.704604 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.704611 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:40.704620 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:40.704630 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:40.769622 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:40.769642 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:40.786992 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:40.787010 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:40.853579 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:40.844241   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.845343   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847164   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847823   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.849606   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:40.853590 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:40.853600 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:40.915814 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:40.915833 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
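The host-side log gathering is deliberately bounded: journalctl takes only the last 400 lines of the kubelet and containerd units, and dmesg is limited to warning-and-above kernel messages (-H human-readable, -P no pager, -L=never to strip color codes) before the same 400-line tail. Verbatim from the commands above:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400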
	I1216 02:57:43.448229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:43.458340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:43.458399 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:43.481954 1848358 cri.go:89] found id: ""
	I1216 02:57:43.481967 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.481974 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:43.481979 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:43.482037 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:43.507588 1848358 cri.go:89] found id: ""
	I1216 02:57:43.507603 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.507610 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:43.507614 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:43.507684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:43.533164 1848358 cri.go:89] found id: ""
	I1216 02:57:43.533179 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.533188 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:43.533193 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:43.533255 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:43.558139 1848358 cri.go:89] found id: ""
	I1216 02:57:43.558152 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.558159 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:43.558164 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:43.558221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:43.587218 1848358 cri.go:89] found id: ""
	I1216 02:57:43.587244 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.587251 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:43.587256 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:43.587315 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:43.613584 1848358 cri.go:89] found id: ""
	I1216 02:57:43.613598 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.613605 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:43.613610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:43.613691 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:43.645887 1848358 cri.go:89] found id: ""
	I1216 02:57:43.645901 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.645908 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:43.645916 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:43.645928 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:43.662557 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:43.662574 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:43.745017 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:43.735622   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.736621   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738304   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738872   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.740427   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:43.745029 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:43.745040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:43.808792 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:43.808811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:43.837682 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:43.837698 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
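The container-status step is runtime-agnostic by construction: it resolves crictl via which (falling back to the bare name if the lookup fails), and if that whole invocation errors out it retries with the docker CLI, so the step still yields a listing on docker-runtime nodes:

    # from the log above: prefer crictl, fall back to docker
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a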
	I1216 02:57:46.396229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:46.406230 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:46.406302 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:46.429707 1848358 cri.go:89] found id: ""
	I1216 02:57:46.429721 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.429728 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:46.429733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:46.429796 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:46.454076 1848358 cri.go:89] found id: ""
	I1216 02:57:46.454090 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.454097 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:46.454101 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:46.454159 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:46.479472 1848358 cri.go:89] found id: ""
	I1216 02:57:46.479486 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.479493 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:46.479498 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:46.479557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:46.505579 1848358 cri.go:89] found id: ""
	I1216 02:57:46.505592 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.505599 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:46.505605 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:46.505665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:46.530373 1848358 cri.go:89] found id: ""
	I1216 02:57:46.530387 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.530394 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:46.530399 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:46.530464 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:46.554723 1848358 cri.go:89] found id: ""
	I1216 02:57:46.554736 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.554743 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:46.554748 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:46.554808 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:46.579147 1848358 cri.go:89] found id: ""
	I1216 02:57:46.579164 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.579171 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:46.579179 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:46.579189 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:46.634449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:46.634473 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:46.651968 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:46.651988 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:46.739219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:46.722068   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.723633   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.732527   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.733244   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.734892   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:46.739239 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:46.739250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:46.812956 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:46.812976 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
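Note that the failing describe-nodes command never touches the host's kubectl: minikube runs the version-matched binary it provisioned under /var/lib/minikube/binaries/v1.35.0-beta.0/ against the node-local kubeconfig, so the connection refused above is measured from inside the node itself:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig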
	I1216 02:57:49.345440 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:49.356029 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:49.356092 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:49.381514 1848358 cri.go:89] found id: ""
	I1216 02:57:49.381528 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.381535 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:49.381540 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:49.381608 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:49.411765 1848358 cri.go:89] found id: ""
	I1216 02:57:49.411779 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.411786 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:49.411791 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:49.411854 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:49.440610 1848358 cri.go:89] found id: ""
	I1216 02:57:49.440624 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.440631 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:49.440637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:49.440705 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:49.470688 1848358 cri.go:89] found id: ""
	I1216 02:57:49.470702 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.470709 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:49.470714 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:49.470774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:49.497170 1848358 cri.go:89] found id: ""
	I1216 02:57:49.497184 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.497191 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:49.497196 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:49.497254 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:49.521925 1848358 cri.go:89] found id: ""
	I1216 02:57:49.521940 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.521947 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:49.521952 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:49.522011 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:49.546344 1848358 cri.go:89] found id: ""
	I1216 02:57:49.546358 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.546366 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:49.546374 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:49.546385 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:49.602407 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:49.602426 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:49.619246 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:49.619263 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:49.683476 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:49.674979   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.675736   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677328   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677797   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.679458   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:49.683488 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:49.683499 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:49.752732 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:49.752753 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:52.289101 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:52.300210 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:52.300272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:52.327757 1848358 cri.go:89] found id: ""
	I1216 02:57:52.327772 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.327779 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:52.327784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:52.327842 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:52.352750 1848358 cri.go:89] found id: ""
	I1216 02:57:52.352764 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.352771 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:52.352776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:52.352834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:52.377100 1848358 cri.go:89] found id: ""
	I1216 02:57:52.377114 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.377135 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:52.377140 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:52.377210 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:52.401376 1848358 cri.go:89] found id: ""
	I1216 02:57:52.401390 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.401397 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:52.401402 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:52.401462 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:52.428592 1848358 cri.go:89] found id: ""
	I1216 02:57:52.428606 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.428613 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:52.428618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:52.428677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:52.457192 1848358 cri.go:89] found id: ""
	I1216 02:57:52.457206 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.457213 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:52.457218 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:52.457276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:52.481473 1848358 cri.go:89] found id: ""
	I1216 02:57:52.481494 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.481501 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:52.481509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:52.481519 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:52.540087 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:52.540106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:52.560374 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:52.560391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:52.628219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:52.619222   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.619906   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.621689   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.622192   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.623773   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:52.628231 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:52.628241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:52.692110 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:52.692130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:55.226607 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:55.236818 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:55.236879 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:55.265073 1848358 cri.go:89] found id: ""
	I1216 02:57:55.265087 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.265094 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:55.265099 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:55.265160 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:55.291262 1848358 cri.go:89] found id: ""
	I1216 02:57:55.291276 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.291284 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:55.291289 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:55.291357 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:55.320515 1848358 cri.go:89] found id: ""
	I1216 02:57:55.320539 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.320546 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:55.320551 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:55.320620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:55.348402 1848358 cri.go:89] found id: ""
	I1216 02:57:55.348426 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.348433 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:55.348438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:55.348500 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:55.373391 1848358 cri.go:89] found id: ""
	I1216 02:57:55.373405 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.373413 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:55.373418 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:55.373480 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:55.402098 1848358 cri.go:89] found id: ""
	I1216 02:57:55.402111 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.402118 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:55.402124 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:55.402183 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:55.427824 1848358 cri.go:89] found id: ""
	I1216 02:57:55.427838 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.427845 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:55.427853 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:55.427863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:55.497187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:55.497216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:55.526960 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:55.526981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:55.585085 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:55.585105 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:55.602223 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:55.602241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:55.671427 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:55.662836   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.663796   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665445   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665748   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.667112   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:55.662836   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.663796   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665445   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665748   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.667112   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
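	The cycle above can be replayed by hand from inside the node (e.g. via `minikube -p functional-389759 ssh`). A minimal sketch, assuming a containerd runtime with crictl on the PATH; it mirrors the per-component probes the runner issues, where empty crictl output is what the log reports as `found id: ""`:

	# probe for each control-plane container the runner looks for
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # empty output means no match, i.e. the 'No container was found' warnings above
	  [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
	done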
	I1216 02:57:58.171689 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:58.181822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:58.181885 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:58.206129 1848358 cri.go:89] found id: ""
	I1216 02:57:58.206143 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.206150 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:58.206155 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:58.206214 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:58.230940 1848358 cri.go:89] found id: ""
	I1216 02:57:58.230954 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.230960 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:58.230966 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:58.231024 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:58.256698 1848358 cri.go:89] found id: ""
	I1216 02:57:58.256712 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.256720 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:58.256724 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:58.256788 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:58.281370 1848358 cri.go:89] found id: ""
	I1216 02:57:58.281385 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.281392 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:58.281396 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:58.281456 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:58.313032 1848358 cri.go:89] found id: ""
	I1216 02:57:58.313046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.313054 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:58.313059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:58.313124 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:58.337968 1848358 cri.go:89] found id: ""
	I1216 02:57:58.337982 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.337989 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:58.337994 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:58.338052 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:58.367215 1848358 cri.go:89] found id: ""
	I1216 02:57:58.367231 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.367239 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:58.367247 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:58.367259 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:58.433078 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:58.423612   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.424320   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426139   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426759   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.428489   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:58.423612   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.424320   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426139   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426759   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.428489   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:58.433088 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:58.433099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:58.496751 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:58.496771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:58.528345 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:58.528362 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:58.585231 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:58.585249 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
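	With no containers to inspect, each cycle falls back to unit logs and kernel messages. The commands below are the same ones the runner executes, runnable verbatim on the node:

	# fallback log sources gathered on every cycle
	sudo journalctl -u containerd -n 400                                      # containerd unit, last 400 lines
	sudo journalctl -u kubelet -n 400                                         # kubelet unit, last 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and above
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # container status, either runtime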
	I1216 02:58:01.103256 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:01.114505 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:01.114572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:01.141817 1848358 cri.go:89] found id: ""
	I1216 02:58:01.141831 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.141838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:01.141843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:01.141908 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:01.170638 1848358 cri.go:89] found id: ""
	I1216 02:58:01.170653 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.170660 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:01.170667 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:01.170733 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:01.197958 1848358 cri.go:89] found id: ""
	I1216 02:58:01.197973 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.197980 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:01.197986 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:01.198051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:01.225715 1848358 cri.go:89] found id: ""
	I1216 02:58:01.225731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.225738 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:01.225744 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:01.225803 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:01.256157 1848358 cri.go:89] found id: ""
	I1216 02:58:01.256171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.256178 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:01.256184 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:01.256244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:01.281610 1848358 cri.go:89] found id: ""
	I1216 02:58:01.281625 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.281633 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:01.281638 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:01.281702 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:01.306348 1848358 cri.go:89] found id: ""
	I1216 02:58:01.306363 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.306370 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:01.306377 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:01.306388 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:01.335207 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:01.335224 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:01.392222 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:01.392242 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:01.408874 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:01.408890 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:01.472601 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:01.464071   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.464670   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.466724   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.467464   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.468593   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:01.464071   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.464670   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.466724   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.467464   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.468593   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:01.472613 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:01.472626 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
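	Each cycle also runs `kubectl describe nodes` against the node-local kubeconfig; that is the probe failing above with `connection refused`. A quick way to confirm the refusal means nothing is bound to the apiserver port, rather than a TLS or auth problem; the `ss` step assumes iproute2 is present in the node image:

	# the probe the runner attempts, using the bundled kubectl
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	# check whether anything is listening on the apiserver port used by this test (8441)
	sudo ss -tlnp | grep 8441 || echo "no listener on port 8441"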
	I1216 02:58:04.035738 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:04.046578 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:04.046661 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:04.072441 1848358 cri.go:89] found id: ""
	I1216 02:58:04.072456 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.072463 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:04.072468 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:04.072531 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:04.103113 1848358 cri.go:89] found id: ""
	I1216 02:58:04.103128 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.103135 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:04.103139 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:04.103208 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:04.127981 1848358 cri.go:89] found id: ""
	I1216 02:58:04.127995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.128002 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:04.128007 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:04.128067 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:04.153050 1848358 cri.go:89] found id: ""
	I1216 02:58:04.153065 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.153072 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:04.153077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:04.153139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:04.176840 1848358 cri.go:89] found id: ""
	I1216 02:58:04.176854 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.176879 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:04.176885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:04.176954 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:04.205747 1848358 cri.go:89] found id: ""
	I1216 02:58:04.205771 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.205779 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:04.205784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:04.205853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:04.234453 1848358 cri.go:89] found id: ""
	I1216 02:58:04.234467 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.234474 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:04.234483 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:04.234505 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:04.294713 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:04.294732 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:04.312011 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:04.312029 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:04.378295 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:04.369406   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.370236   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372024   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372637   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.374434   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:04.369406   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.370236   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372024   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372637   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.374434   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:04.378314 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:04.378325 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:04.440962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:04.440984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
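	Every cycle opens with a `pgrep` for the apiserver process, and it is returning nothing here. A short sketch for telling "process missing" apart from "process up but unhealthy"; the curl step assumes curl exists in the node image and uses the apiserver's standard /healthz endpoint:

	# the process probe that opens each cycle
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	# if a process does exist, ask the health endpoint directly (-k: skip cert verification)
	curl -ksS https://localhost:8441/healthz; echo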
	I1216 02:58:06.970088 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:06.983751 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:06.983819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:07.013657 1848358 cri.go:89] found id: ""
	I1216 02:58:07.013672 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.013679 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:07.013684 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:07.013752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:07.038882 1848358 cri.go:89] found id: ""
	I1216 02:58:07.038896 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.038904 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:07.038909 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:07.038968 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:07.064215 1848358 cri.go:89] found id: ""
	I1216 02:58:07.064230 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.064237 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:07.064242 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:07.064304 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:07.088144 1848358 cri.go:89] found id: ""
	I1216 02:58:07.088158 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.088165 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:07.088170 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:07.088229 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:07.112044 1848358 cri.go:89] found id: ""
	I1216 02:58:07.112059 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.112066 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:07.112071 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:07.112137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:07.138570 1848358 cri.go:89] found id: ""
	I1216 02:58:07.138586 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.138593 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:07.138599 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:07.138658 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:07.166931 1848358 cri.go:89] found id: ""
	I1216 02:58:07.166945 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.166952 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:07.166959 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:07.166973 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:07.197292 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:07.197308 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:07.255003 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:07.255023 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:07.273531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:07.273547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:07.338842 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:07.330204   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.330976   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.332722   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.333328   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.334951   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:07.330204   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.330976   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.332722   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.333328   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.334951   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:07.338852 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:07.338863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:09.902725 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:09.913150 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:09.913213 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:09.946614 1848358 cri.go:89] found id: ""
	I1216 02:58:09.946627 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.946634 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:09.946639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:09.946703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:09.975470 1848358 cri.go:89] found id: ""
	I1216 02:58:09.975484 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.975491 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:09.975496 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:09.975557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:10.002745 1848358 cri.go:89] found id: ""
	I1216 02:58:10.002773 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.002782 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:10.002787 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:10.002866 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:10.035489 1848358 cri.go:89] found id: ""
	I1216 02:58:10.035504 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.035512 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:10.035517 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:10.035581 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:10.062019 1848358 cri.go:89] found id: ""
	I1216 02:58:10.062044 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.062052 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:10.062059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:10.062139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:10.088952 1848358 cri.go:89] found id: ""
	I1216 02:58:10.088977 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.088986 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:10.088991 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:10.089061 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:10.115714 1848358 cri.go:89] found id: ""
	I1216 02:58:10.115736 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.115744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:10.115752 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:10.115762 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:10.172504 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:10.172524 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:10.190804 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:10.190821 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:10.258662 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:10.249875   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.250514   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252114   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252638   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.254176   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:10.249875   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.250514   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252114   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252638   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.254176   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:10.258675 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:10.258686 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:10.321543 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:10.321562 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:12.849334 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:12.859284 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:12.859345 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:12.884624 1848358 cri.go:89] found id: ""
	I1216 02:58:12.884640 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.884648 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:12.884653 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:12.884722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:12.908735 1848358 cri.go:89] found id: ""
	I1216 02:58:12.908749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.908756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:12.908761 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:12.908819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:12.944827 1848358 cri.go:89] found id: ""
	I1216 02:58:12.944841 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.944848 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:12.944854 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:12.944917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:12.974281 1848358 cri.go:89] found id: ""
	I1216 02:58:12.974295 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.974302 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:12.974308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:12.974367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:13.008278 1848358 cri.go:89] found id: ""
	I1216 02:58:13.008294 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.008302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:13.008307 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:13.008376 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:13.034272 1848358 cri.go:89] found id: ""
	I1216 02:58:13.034286 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.034294 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:13.034299 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:13.034361 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:13.064663 1848358 cri.go:89] found id: ""
	I1216 02:58:13.064688 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.064695 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:13.064703 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:13.064716 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:13.127826 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:13.127848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:13.158482 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:13.158498 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:13.218053 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:13.218072 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:13.234830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:13.234846 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:13.298317 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:13.289893   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.290910   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.291765   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293309   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293580   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:13.289893   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.290910   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.291765   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293309   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293580   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
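	The runner repeats this cycle roughly every three seconds until its start timeout expires. A hand-rolled equivalent for waiting on the container to appear, purely illustrative (the 40 x 3 s budget here is an arbitrary choice, not the test's actual timeout):

	# poll for up to ~2 minutes for a running kube-apiserver container
	for i in $(seq 1 40); do
	  if sudo crictl ps --quiet --name=kube-apiserver | grep -q .; then
	    echo "kube-apiserver container is up"; break
	  fi
	  sleep 3
	done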
	I1216 02:58:15.798590 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:15.809144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:15.809225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:15.834683 1848358 cri.go:89] found id: ""
	I1216 02:58:15.834696 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.834704 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:15.834709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:15.834774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:15.860001 1848358 cri.go:89] found id: ""
	I1216 02:58:15.860030 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.860038 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:15.860042 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:15.860113 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:15.884488 1848358 cri.go:89] found id: ""
	I1216 02:58:15.884503 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.884510 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:15.884515 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:15.884572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:15.908030 1848358 cri.go:89] found id: ""
	I1216 02:58:15.908045 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.908051 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:15.908056 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:15.908116 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:15.932641 1848358 cri.go:89] found id: ""
	I1216 02:58:15.932654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.932661 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:15.932666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:15.932723 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:15.962741 1848358 cri.go:89] found id: ""
	I1216 02:58:15.962754 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.962772 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:15.962779 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:15.962836 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:15.990774 1848358 cri.go:89] found id: ""
	I1216 02:58:15.990788 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.990806 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:15.990829 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:15.990838 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:16.067729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:16.067748 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:16.098615 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:16.098635 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:16.154944 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:16.154963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:16.172510 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:16.172527 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:16.237380 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:16.229269   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.229868   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231379   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231950   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.233594   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:16.229269   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.229868   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231379   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231950   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.233594   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
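	Since the kubelet journal is the only populated source in these cycles, filtering it for static-pod and apiserver errors is the natural next triage step. A minimal sketch; the grep pattern is an assumption about what the relevant kubelet messages look like, not taken from this log:

	# narrow the kubelet log to likely-relevant failures
	sudo journalctl -u kubelet -n 400 --no-pager \
	  | grep -Ei 'kube-apiserver|static pod|fail|error' \
	  | tail -n 40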
	I1216 02:58:18.738100 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:18.751636 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:18.751717 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:18.779608 1848358 cri.go:89] found id: ""
	I1216 02:58:18.779622 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.779629 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:18.779634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:18.779693 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:18.805721 1848358 cri.go:89] found id: ""
	I1216 02:58:18.805735 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.805742 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:18.805747 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:18.805812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:18.831187 1848358 cri.go:89] found id: ""
	I1216 02:58:18.831203 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.831210 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:18.831215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:18.831280 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:18.857343 1848358 cri.go:89] found id: ""
	I1216 02:58:18.857367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.857375 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:18.857380 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:18.857448 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:18.882737 1848358 cri.go:89] found id: ""
	I1216 02:58:18.882751 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.882758 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:18.882765 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:18.882834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:18.907486 1848358 cri.go:89] found id: ""
	I1216 02:58:18.907500 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.907508 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:18.907513 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:18.907573 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:18.939361 1848358 cri.go:89] found id: ""
	I1216 02:58:18.939375 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.939382 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:18.939390 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:18.939401 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:19.019241 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:19.010485   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.010907   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.012525   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.013150   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.014918   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:19.010485   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.010907   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.012525   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.013150   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.014918   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:19.019251 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:19.019262 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:19.081820 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:19.081842 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:19.110025 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:19.110042 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:19.166216 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:19.166236 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
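Each cycle above runs the same inventory over SSH: one "crictl ps -a --quiet --name=<component>" per control-plane component, then journalctl, dmesg, and a kubectl describe-nodes attempt. A minimal sketch for reproducing that inventory by hand inside the node (a shell there is reachable via "minikube ssh"; with the docker driver, "docker exec" into the node container also works):

    # Hedged reproduction of the container inventory gathered above; every
    # command is copied from the Run: lines in this log, only the loop is added.
    # Empty output from `crictl ps --quiet` is what the log records as: found id: ""
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400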
	I1216 02:58:21.684597 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:21.694910 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:21.694974 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:21.719581 1848358 cri.go:89] found id: ""
	I1216 02:58:21.719595 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.719602 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:21.719607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:21.719670 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:21.745661 1848358 cri.go:89] found id: ""
	I1216 02:58:21.745675 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.745682 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:21.745688 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:21.745745 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:21.770329 1848358 cri.go:89] found id: ""
	I1216 02:58:21.770342 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.770349 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:21.770354 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:21.770425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:21.795402 1848358 cri.go:89] found id: ""
	I1216 02:58:21.795416 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.795423 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:21.795434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:21.795492 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:21.821959 1848358 cri.go:89] found id: ""
	I1216 02:58:21.821972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.821979 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:21.821984 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:21.822043 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:21.845121 1848358 cri.go:89] found id: ""
	I1216 02:58:21.845135 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.845142 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:21.845148 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:21.845209 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:21.868958 1848358 cri.go:89] found id: ""
	I1216 02:58:21.868972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.868979 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:21.868987 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:21.868997 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:21.932460 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:21.924049   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.924916   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926515   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926825   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.928346   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:21.924049   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.924916   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926515   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926825   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.928346   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
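The repeated "connection refused" on localhost:8441 means the probe never reaches a listener at all, consistent with crictl finding no kube-apiserver container. A quick confirmation from inside the node, assuming ss and curl are present in the node image (a hedged sketch, not part of the test itself):

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # If a listener does appear later, probe the apiserver health endpoint directly:
    curl -sk https://localhost:8441/livez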
	I1216 02:58:21.932490 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:21.932502 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:22.006384 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:22.006415 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:22.040639 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:22.040655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:22.097981 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:22.098000 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:24.615636 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:24.626423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:24.626486 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:24.650890 1848358 cri.go:89] found id: ""
	I1216 02:58:24.650904 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.650911 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:24.650916 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:24.650984 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:24.676132 1848358 cri.go:89] found id: ""
	I1216 02:58:24.676146 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.676153 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:24.676158 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:24.676219 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:24.705732 1848358 cri.go:89] found id: ""
	I1216 02:58:24.705746 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.705753 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:24.705758 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:24.705820 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:24.729899 1848358 cri.go:89] found id: ""
	I1216 02:58:24.729914 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.729922 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:24.729927 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:24.729988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:24.760724 1848358 cri.go:89] found id: ""
	I1216 02:58:24.760744 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.760752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:24.760756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:24.760821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:24.789128 1848358 cri.go:89] found id: ""
	I1216 02:58:24.789144 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.789151 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:24.789157 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:24.789221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:24.814525 1848358 cri.go:89] found id: ""
	I1216 02:58:24.814539 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.814548 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:24.814555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:24.814567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:24.845234 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:24.845251 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:24.904816 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:24.904835 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:24.922721 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:24.922744 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:25.017286 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:25.006539   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.007431   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.009403   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.010041   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.013452   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:25.006539   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.007431   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.009403   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.010041   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.013452   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:25.017298 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:25.017309 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.580148 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:27.590499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:27.590563 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:27.614749 1848358 cri.go:89] found id: ""
	I1216 02:58:27.614764 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.614771 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:27.614776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:27.614835 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:27.638735 1848358 cri.go:89] found id: ""
	I1216 02:58:27.638749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.638756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:27.638762 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:27.638821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:27.665480 1848358 cri.go:89] found id: ""
	I1216 02:58:27.665495 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.665503 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:27.665508 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:27.665565 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:27.695981 1848358 cri.go:89] found id: ""
	I1216 02:58:27.695996 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.696004 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:27.696009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:27.696088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:27.720368 1848358 cri.go:89] found id: ""
	I1216 02:58:27.720390 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.720397 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:27.720403 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:27.720469 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:27.746357 1848358 cri.go:89] found id: ""
	I1216 02:58:27.746371 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.746377 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:27.746383 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:27.746441 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:27.770684 1848358 cri.go:89] found id: ""
	I1216 02:58:27.770708 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.770716 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:27.770724 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:27.770734 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.836245 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:27.836265 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:27.865946 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:27.865964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:27.924653 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:27.924675 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:27.945999 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:27.946015 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:28.027275 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:28.018924   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.019507   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021144   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021655   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.023260   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:28.018924   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.019507   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021144   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021655   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.023260   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
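The timestamps show the same probe sequence repeating at a roughly three-second cadence (02:58:21, 02:58:24, 02:58:27, 02:58:30). The shape of that wait, as a hedged shell sketch (minikube's actual loop lives in its Go code; this only mirrors the observable behavior, and the 300 s deadline is illustrative, not minikube's):

    # Poll for a kube-apiserver process until it appears or a deadline passes.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
      sleep 3
    done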
	I1216 02:58:30.527490 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:30.537746 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:30.537811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:30.562783 1848358 cri.go:89] found id: ""
	I1216 02:58:30.562797 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.562805 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:30.562810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:30.562882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:30.587495 1848358 cri.go:89] found id: ""
	I1216 02:58:30.587509 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.587515 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:30.587521 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:30.587583 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:30.611375 1848358 cri.go:89] found id: ""
	I1216 02:58:30.611392 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.611400 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:30.611406 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:30.611472 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:30.635442 1848358 cri.go:89] found id: ""
	I1216 02:58:30.635457 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.635464 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:30.635469 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:30.635527 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:30.659725 1848358 cri.go:89] found id: ""
	I1216 02:58:30.659745 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.659752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:30.659757 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:30.659819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:30.683639 1848358 cri.go:89] found id: ""
	I1216 02:58:30.683654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.683661 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:30.683666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:30.683725 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:30.709231 1848358 cri.go:89] found id: ""
	I1216 02:58:30.709246 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.709252 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:30.709260 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:30.709271 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:30.765116 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:30.765136 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:30.782213 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:30.782230 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:30.843173 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:30.835436   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.836096   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837188   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837822   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.839482   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:30.835436   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.836096   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837188   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837822   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.839482   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:30.843184 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:30.843195 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:30.905457 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:30.905477 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.448949 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:33.458942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:33.459006 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:33.494559 1848358 cri.go:89] found id: ""
	I1216 02:58:33.494573 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.494582 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:33.494602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:33.494672 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:33.521008 1848358 cri.go:89] found id: ""
	I1216 02:58:33.521028 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.521036 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:33.521041 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:33.521103 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:33.545598 1848358 cri.go:89] found id: ""
	I1216 02:58:33.545613 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.545620 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:33.545625 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:33.545684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:33.573194 1848358 cri.go:89] found id: ""
	I1216 02:58:33.573207 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.573214 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:33.573219 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:33.573284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:33.597747 1848358 cri.go:89] found id: ""
	I1216 02:58:33.597761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.597784 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:33.597789 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:33.597859 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:33.621788 1848358 cri.go:89] found id: ""
	I1216 02:58:33.621803 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.621810 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:33.621815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:33.621892 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:33.646528 1848358 cri.go:89] found id: ""
	I1216 02:58:33.646543 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.646550 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:33.646557 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:33.646567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:33.708165 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:33.708187 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.736001 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:33.736018 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:33.791763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:33.791786 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:33.808896 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:33.808912 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:33.876753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:33.868694   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.869434   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871119   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871558   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.873040   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:33.868694   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.869434   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871119   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871558   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.873040   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:36.376982 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:36.386962 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:36.387033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:36.410927 1848358 cri.go:89] found id: ""
	I1216 02:58:36.410941 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.410948 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:36.410954 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:36.411013 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:36.436158 1848358 cri.go:89] found id: ""
	I1216 02:58:36.436171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.436179 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:36.436189 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:36.436260 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:36.460716 1848358 cri.go:89] found id: ""
	I1216 02:58:36.460730 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.460737 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:36.460743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:36.460815 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:36.485244 1848358 cri.go:89] found id: ""
	I1216 02:58:36.485258 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.485266 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:36.485272 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:36.485335 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:36.509347 1848358 cri.go:89] found id: ""
	I1216 02:58:36.509361 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.509368 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:36.509374 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:36.509434 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:36.534352 1848358 cri.go:89] found id: ""
	I1216 02:58:36.534367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.534374 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:36.534419 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:36.534481 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:36.560075 1848358 cri.go:89] found id: ""
	I1216 02:58:36.560090 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.560097 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:36.560105 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:36.560116 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:36.618652 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:36.618670 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:36.635627 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:36.635643 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:36.704527 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:36.695687   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.696277   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698043   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698600   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.700190   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:36.695687   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.696277   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698043   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698600   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.700190   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:36.704537 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:36.704550 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:36.767179 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:36.767199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
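The "container status" step shells out to `which crictl || echo crictl` and leaves endpoint discovery to crictl's defaults. With the containerd runtime used in this job, the explicit equivalent would pin the CRI socket; a hedged sketch (the socket path below is containerd's stock default, not something read from this log):

    # Explicit form of the container-status probe, pinned to containerd's CRI socket.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a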
	I1216 02:58:39.295686 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:39.305848 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:39.305909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:39.329771 1848358 cri.go:89] found id: ""
	I1216 02:58:39.329785 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.329792 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:39.329797 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:39.329857 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:39.354814 1848358 cri.go:89] found id: ""
	I1216 02:58:39.354829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.354836 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:39.354841 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:39.354900 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:39.380095 1848358 cri.go:89] found id: ""
	I1216 02:58:39.380110 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.380117 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:39.380122 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:39.380182 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:39.404438 1848358 cri.go:89] found id: ""
	I1216 02:58:39.404453 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.404460 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:39.404465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:39.404526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:39.432615 1848358 cri.go:89] found id: ""
	I1216 02:58:39.432630 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.432636 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:39.432644 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:39.432709 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:39.456879 1848358 cri.go:89] found id: ""
	I1216 02:58:39.456893 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.456900 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:39.456905 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:39.456966 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:39.481400 1848358 cri.go:89] found id: ""
	I1216 02:58:39.481415 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.481421 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:39.481430 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:39.481441 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:39.540413 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:39.540433 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:39.558600 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:39.558618 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:39.623191 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:39.614791   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.615619   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617365   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617686   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.619229   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:39.614791   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.615619   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617365   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617686   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.619229   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
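The failing describe-nodes command runs the node's own kubectl binary against /var/lib/minikube/kubeconfig. The same query can be issued from the host through minikube's kubectl passthrough, which targets the identical endpoint; a hedged sketch, with <profile> as a placeholder for the profile under test:

    minikube -p <profile> kubectl -- describe nodes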
	I1216 02:58:39.623201 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:39.623212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:39.685663 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:39.685683 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:42.212532 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:42.242820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:42.242893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:42.277407 1848358 cri.go:89] found id: ""
	I1216 02:58:42.277427 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.277435 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:42.277441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:42.277513 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:42.313862 1848358 cri.go:89] found id: ""
	I1216 02:58:42.313877 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.313893 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:42.313898 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:42.313963 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:42.345979 1848358 cri.go:89] found id: ""
	I1216 02:58:42.345995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.346003 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:42.346009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:42.346075 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:42.372530 1848358 cri.go:89] found id: ""
	I1216 02:58:42.372545 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.372552 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:42.372558 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:42.372622 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:42.400807 1848358 cri.go:89] found id: ""
	I1216 02:58:42.400821 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.400829 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:42.400834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:42.400901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:42.426053 1848358 cri.go:89] found id: ""
	I1216 02:58:42.426067 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.426074 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:42.426079 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:42.426137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:42.453460 1848358 cri.go:89] found id: ""
	I1216 02:58:42.453475 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.453482 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:42.453490 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:42.453500 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:42.509219 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:42.509237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:42.526995 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:42.527011 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:42.589697 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:42.581361   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.581873   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.583507   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.584135   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.585772   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:42.581361   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.581873   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.583507   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.584135   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.585772   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:42.589706 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:42.589723 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:42.655306 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:42.655326 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
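
The cycle above repeats one basic probe per control-plane component: run `sudo crictl ps -a --quiet --name=<component>` over SSH and treat empty stdout as "0 containers". A minimal local sketch of that check (not minikube's actual implementation, which dispatches these commands through ssh_runner.go) looks like this in Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the container IDs crictl reports for the given
    // name filter; an empty slice mirrors the `found id: ""` lines in the log.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("W listing %q failed: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("I %q: %d containers: %v\n", c, len(ids), ids)
    	}
    }

Every component coming back empty, cycle after cycle, is what drives the repeated "No container was found matching" warnings.
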
	I1216 02:58:45.183328 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:45.217035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:45.217117 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:45.257225 1848358 cri.go:89] found id: ""
	I1216 02:58:45.257247 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.257258 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:45.257264 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:45.257334 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:45.304389 1848358 cri.go:89] found id: ""
	I1216 02:58:45.304407 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.304416 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:45.304423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:45.304509 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:45.334339 1848358 cri.go:89] found id: ""
	I1216 02:58:45.334354 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.334362 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:45.334367 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:45.334435 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:45.360176 1848358 cri.go:89] found id: ""
	I1216 02:58:45.360190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.360198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:45.360203 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:45.360263 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:45.384648 1848358 cri.go:89] found id: ""
	I1216 02:58:45.384663 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.384669 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:45.384678 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:45.384738 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:45.411115 1848358 cri.go:89] found id: ""
	I1216 02:58:45.411131 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.411138 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:45.411144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:45.411218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:45.437746 1848358 cri.go:89] found id: ""
	I1216 02:58:45.437761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.437768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:45.437776 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:45.437797 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:45.500791 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:45.500811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:45.530882 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:45.530899 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:45.588591 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:45.588609 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:45.605872 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:45.605900 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:45.673187 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:45.663658   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.664990   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.665900   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.667592   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.668146   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:45.663658   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.664990   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.665900   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.667592   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.668146   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
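
The `dial tcp [::1]:8441: connect: connection refused` stderr above means nothing is listening on the apiserver port inside the node, which is consistent with the crictl probes finding no kube-apiserver container. Port 8441 is the --apiserver-port the test passed to `minikube start`. A quick probe (a hypothetical helper, not part of minikube) reproduces the same dial error kubectl hits:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 8441 is the --apiserver-port from the failing StartWithProxy test.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
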
	I1216 02:58:48.173453 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:48.186360 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:48.186425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:48.216541 1848358 cri.go:89] found id: ""
	I1216 02:58:48.216556 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.216563 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:48.216568 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:48.216633 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:48.243385 1848358 cri.go:89] found id: ""
	I1216 02:58:48.243399 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.243407 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:48.243412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:48.243473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:48.268738 1848358 cri.go:89] found id: ""
	I1216 02:58:48.268752 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.268759 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:48.268764 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:48.268825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:48.293634 1848358 cri.go:89] found id: ""
	I1216 02:58:48.293649 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.293657 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:48.293662 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:48.293722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:48.320780 1848358 cri.go:89] found id: ""
	I1216 02:58:48.320796 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.320805 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:48.320810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:48.320872 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:48.344687 1848358 cri.go:89] found id: ""
	I1216 02:58:48.344701 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.344710 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:48.344715 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:48.344775 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:48.368368 1848358 cri.go:89] found id: ""
	I1216 02:58:48.368383 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.368390 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:48.368398 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:48.368407 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:48.424495 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:48.424515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:48.441644 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:48.441660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:48.506701 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:48.498238   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.498920   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.500649   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.501232   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.502941   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:48.498238   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.498920   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.500649   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.501232   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.502941   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:48.506710 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:48.506721 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:48.569962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:48.569984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
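
Each "Gathering logs for ..." pass fans out over a fixed set of sources, one shell command per source, including a fallback from crictl to docker for the container status listing. A sketch of that fan-out, with the command strings copied from the log but the surrounding plumbing illustrative rather than minikube's logs.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := []struct{ name, cmd string }{
    		{"kubelet", `sudo journalctl -u kubelet -n 400`},
    		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
    		{"containerd", `sudo journalctl -u containerd -n 400`},
    		// Falls back to `docker ps -a` if crictl is missing or fails.
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		fmt.Println("Gathering logs for", s.name, "...")
    		// bash -c is required because several commands contain pipes.
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("  %s failed: %v\n", s.name, err)
    		}
    		fmt.Printf("%s", out)
    	}
    }

The "describe nodes" source is the only one that keeps failing here, because it needs a reachable apiserver while the journalctl and dmesg sources read purely local state.
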
	I1216 02:58:51.098190 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:51.108977 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:51.109048 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:51.134223 1848358 cri.go:89] found id: ""
	I1216 02:58:51.134237 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.134244 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:51.134249 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:51.134310 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:51.161239 1848358 cri.go:89] found id: ""
	I1216 02:58:51.161253 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.161261 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:51.161266 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:51.161326 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:51.202211 1848358 cri.go:89] found id: ""
	I1216 02:58:51.202225 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.202232 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:51.202237 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:51.202296 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:51.233630 1848358 cri.go:89] found id: ""
	I1216 02:58:51.233651 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.233658 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:51.233663 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:51.233728 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:51.270204 1848358 cri.go:89] found id: ""
	I1216 02:58:51.270219 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.270233 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:51.270238 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:51.270301 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:51.298689 1848358 cri.go:89] found id: ""
	I1216 02:58:51.298705 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.298716 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:51.298722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:51.298799 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:51.323107 1848358 cri.go:89] found id: ""
	I1216 02:58:51.323126 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.323133 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:51.323140 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:51.323150 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:51.386665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:51.386693 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:51.404372 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:51.404391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:51.469512 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:51.460793   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.461262   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.462922   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.463490   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.465130   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:51.460793   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.461262   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.462922   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.463490   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.465130   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:51.469532 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:51.469554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:51.535704 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:51.535725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
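
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, and the timestamps show the cycles starting roughly every three seconds, which looks like a plain poll-until-deadline loop. A minimal sketch under that assumption (the interval and deadline here are guesses, not minikube constants):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process shows
    // up or the deadline passes; pgrep exits 0 only when a match exists.
    func waitForAPIServerProcess(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(3*time.Second, time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

In this run the process never appears, so the loop keeps cycling until the test's own timeout, matching the 502 s failure duration in the summary table.
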
	I1216 02:58:54.065223 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:54.077244 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:54.077307 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:54.106090 1848358 cri.go:89] found id: ""
	I1216 02:58:54.106103 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.106110 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:54.106115 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:54.106177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:54.131805 1848358 cri.go:89] found id: ""
	I1216 02:58:54.131819 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.131833 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:54.131838 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:54.131899 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:54.156816 1848358 cri.go:89] found id: ""
	I1216 02:58:54.156829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.156837 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:54.156842 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:54.156901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:54.181654 1848358 cri.go:89] found id: ""
	I1216 02:58:54.181669 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.181693 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:54.181698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:54.181765 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:54.219797 1848358 cri.go:89] found id: ""
	I1216 02:58:54.219812 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.219819 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:54.219833 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:54.219910 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:54.251176 1848358 cri.go:89] found id: ""
	I1216 02:58:54.251190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.251197 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:54.251202 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:54.251265 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:54.275716 1848358 cri.go:89] found id: ""
	I1216 02:58:54.275731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.275739 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:54.275747 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:54.275758 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:54.338395 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:54.330425   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.330860   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332372   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332678   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.334183   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:54.330425   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.330860   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332372   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332678   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.334183   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:54.338408 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:54.338429 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:54.401729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:54.401749 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:54.429361 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:54.429376 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:54.489525 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:54.489545 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:57.006993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:57.017732 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:57.017792 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:57.042221 1848358 cri.go:89] found id: ""
	I1216 02:58:57.042235 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.042242 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:57.042248 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:57.042316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:57.069364 1848358 cri.go:89] found id: ""
	I1216 02:58:57.069378 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.069385 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:57.069390 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:57.069450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:57.093795 1848358 cri.go:89] found id: ""
	I1216 02:58:57.093808 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.093815 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:57.093820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:57.093881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:57.118148 1848358 cri.go:89] found id: ""
	I1216 02:58:57.118161 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.118168 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:57.118177 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:57.118235 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:57.142161 1848358 cri.go:89] found id: ""
	I1216 02:58:57.142175 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.142182 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:57.142187 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:57.142247 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:57.169165 1848358 cri.go:89] found id: ""
	I1216 02:58:57.169178 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.169186 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:57.169191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:57.169256 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:57.200840 1848358 cri.go:89] found id: ""
	I1216 02:58:57.200855 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.200862 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:57.200870 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:57.200881 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:57.260426 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:57.260444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:57.285637 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:57.285654 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:57.350704 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:57.342168   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.342999   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.344857   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.345195   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.346755   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:57.342168   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.342999   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.344857   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.345195   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.346755   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:57.350714 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:57.350727 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:57.413587 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:57.413606 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
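
The `{State:all Name:kube-proxy Namespaces:[]}` fragments in the cri.go:54 lines are Go's %+v rendering of the filter struct handed to the CRI listing helper. A hypothetical struct with the same field names reproduces the formatting exactly:

    package main

    import "fmt"

    // ListOptions mimics the filter shape seen in the log; the real type lives
    // in minikube's cri package and may differ.
    type ListOptions struct {
    	State      string
    	Name       string
    	Namespaces []string
    }

    func main() {
    	opts := ListOptions{State: "all", Name: "kube-proxy"}
    	// %+v prints field names, producing {State:all Name:kube-proxy Namespaces:[]}.
    	fmt.Printf("listing CRI containers in root /run/containerd/runc/k8s.io: %+v\n", opts)
    }
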
	I1216 02:58:59.944007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:59.954621 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:59.954685 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:59.979450 1848358 cri.go:89] found id: ""
	I1216 02:58:59.979466 1848358 logs.go:282] 0 containers: []
	W1216 02:58:59.979474 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:59.979479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:59.979543 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:00.040218 1848358 cri.go:89] found id: ""
	I1216 02:59:00.040237 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.040245 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:00.040251 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:00.040325 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:00.225643 1848358 cri.go:89] found id: ""
	I1216 02:59:00.225659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.225666 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:00.225679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:00.225749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:00.292916 1848358 cri.go:89] found id: ""
	I1216 02:59:00.292933 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.292941 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:00.292947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:00.293016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:00.327359 1848358 cri.go:89] found id: ""
	I1216 02:59:00.327375 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.327383 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:00.327389 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:00.327463 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:00.362091 1848358 cri.go:89] found id: ""
	I1216 02:59:00.362107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.362116 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:00.362121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:00.362205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:00.392615 1848358 cri.go:89] found id: ""
	I1216 02:59:00.392648 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.392656 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:00.392665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:00.392677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:00.411628 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:00.411646 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:00.485425 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:00.476589   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.477159   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.478750   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.479401   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.480376   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:00.476589   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.477159   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.478750   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.479401   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.480376   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:00.485435 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:00.485446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:00.548759 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:00.548779 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:00.579219 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:00.579235 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:03.138643 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:03.151350 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:03.151414 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:03.177456 1848358 cri.go:89] found id: ""
	I1216 02:59:03.177480 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.177489 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:03.177494 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:03.177576 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:03.209025 1848358 cri.go:89] found id: ""
	I1216 02:59:03.209054 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.209063 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:03.209068 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:03.209142 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:03.245557 1848358 cri.go:89] found id: ""
	I1216 02:59:03.245571 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.245578 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:03.245583 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:03.245651 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:03.273887 1848358 cri.go:89] found id: ""
	I1216 02:59:03.273902 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.273909 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:03.273914 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:03.273980 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:03.299955 1848358 cri.go:89] found id: ""
	I1216 02:59:03.299970 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.299977 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:03.299987 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:03.300050 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:03.325891 1848358 cri.go:89] found id: ""
	I1216 02:59:03.325906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.325913 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:03.325918 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:03.325977 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:03.353059 1848358 cri.go:89] found id: ""
	I1216 02:59:03.353073 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.353080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:03.353088 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:03.353101 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:03.409018 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:03.409040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:03.427124 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:03.427141 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:03.498219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:03.489642   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.490076   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.491637   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.492014   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.493527   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:03.489642   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.490076   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.491637   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.492014   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.493527   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:03.498236 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:03.498250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:03.563005 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:03.563031 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:06.091678 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:06.102426 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:06.102489 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:06.127426 1848358 cri.go:89] found id: ""
	I1216 02:59:06.127439 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.127446 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:06.127452 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:06.127511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:06.152255 1848358 cri.go:89] found id: ""
	I1216 02:59:06.152270 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.152277 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:06.152282 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:06.152344 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:06.181806 1848358 cri.go:89] found id: ""
	I1216 02:59:06.181832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.181840 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:06.181846 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:06.181909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:06.211543 1848358 cri.go:89] found id: ""
	I1216 02:59:06.211558 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.211565 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:06.211576 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:06.211638 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:06.239433 1848358 cri.go:89] found id: ""
	I1216 02:59:06.239448 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.239454 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:06.239460 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:06.239521 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:06.265180 1848358 cri.go:89] found id: ""
	I1216 02:59:06.265199 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.265206 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:06.265212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:06.265273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:06.288594 1848358 cri.go:89] found id: ""
	I1216 02:59:06.288608 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.288615 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:06.288622 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:06.288633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:06.347416 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:06.347440 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:06.365120 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:06.365137 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:06.429753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:06.422151   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.422683   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424172   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424494   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.425949   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:06.422151   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.422683   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424172   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424494   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.425949   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:06.429762 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:06.429772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:06.491187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:06.491205 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
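
Editor's note on the probe pattern above: each diagnostic cycle runs "sudo crictl ps -a --quiet --name=<component>" for every control-plane component and logs "0 containers" when nothing matches. Below is a minimal Go sketch of that check, assuming crictl is installed and sudo is passwordless; listContainerIDs is an illustrative helper, not minikube's actual cri.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out to crictl and returns the IDs of all
// containers (running or not) whose name matches the filter,
// mirroring "sudo crictl ps -a --quiet --name=<name>" from the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; an empty output
	// yields an empty slice, i.e. the "found id: \"\"" case above.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, comp := range components {
		ids, err := listContainerIDs(comp)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		fmt.Printf("%s: %v\n", comp, ids)
	}
}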
	I1216 02:59:09.021976 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:09.032138 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:09.032199 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:09.056495 1848358 cri.go:89] found id: ""
	I1216 02:59:09.056509 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.056517 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:09.056522 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:09.056579 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:09.085249 1848358 cri.go:89] found id: ""
	I1216 02:59:09.085263 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.085269 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:09.085275 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:09.085336 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:09.109270 1848358 cri.go:89] found id: ""
	I1216 02:59:09.109284 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.109291 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:09.109296 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:09.109365 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:09.134217 1848358 cri.go:89] found id: ""
	I1216 02:59:09.134231 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.134238 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:09.134243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:09.134305 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:09.158656 1848358 cri.go:89] found id: ""
	I1216 02:59:09.158670 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.158677 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:09.158682 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:09.158749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:09.190922 1848358 cri.go:89] found id: ""
	I1216 02:59:09.190937 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.190944 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:09.190949 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:09.191020 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:09.231605 1848358 cri.go:89] found id: ""
	I1216 02:59:09.231619 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.231633 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:09.231642 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:09.231652 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:09.293613 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:09.293633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:09.310949 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:09.310966 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:09.378806 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:09.369691   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.370608   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372360   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372849   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.374328   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:09.369691   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.370608   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372360   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372849   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.374328   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:09.378816 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:09.378827 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:09.440510 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:09.440528 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
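
The repeated "describe nodes" failures above all reduce to one symptom: nothing is listening on the apiserver port 8441, so every kubectl request gets "connect: connection refused". A short Go sketch that reproduces the same check directly; the port number comes from the --apiserver-port=8441 flag used by this test, while the timeout is illustrative.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Attempt a plain TCP connection to the apiserver port. With no
	// kube-apiserver container running, this fails with
	// "connect: connection refused", matching the kubectl stderr
	// in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}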
	I1216 02:59:11.972007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:11.982340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:11.982402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:12.014868 1848358 cri.go:89] found id: ""
	I1216 02:59:12.014883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.014890 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:12.014895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:12.014969 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:12.040987 1848358 cri.go:89] found id: ""
	I1216 02:59:12.041002 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.041008 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:12.041013 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:12.041090 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:12.065526 1848358 cri.go:89] found id: ""
	I1216 02:59:12.065540 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.065561 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:12.065566 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:12.065635 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:12.093806 1848358 cri.go:89] found id: ""
	I1216 02:59:12.093833 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.093841 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:12.093849 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:12.093921 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:12.121567 1848358 cri.go:89] found id: ""
	I1216 02:59:12.121595 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.121602 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:12.121607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:12.121677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:12.144869 1848358 cri.go:89] found id: ""
	I1216 02:59:12.144883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.144890 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:12.144895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:12.144955 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:12.168723 1848358 cri.go:89] found id: ""
	I1216 02:59:12.168737 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.168744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:12.168752 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:12.168769 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:12.185531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:12.185547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:12.264487 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:12.255585   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.256344   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258045   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258783   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.260488   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:12.255585   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.256344   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258045   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258783   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.260488   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:12.264497 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:12.264508 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:12.326049 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:12.326068 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:12.353200 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:12.353216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
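
The "Gathering logs for ..." steps above shell out to journalctl for the kubelet and containerd units (plus dmesg for kernel messages). A sketch of that gathering step, assuming a systemd host with passwordless sudo; runUnitLogs is an illustrative name, not a minikube function.

package main

import (
	"fmt"
	"os/exec"
)

// runUnitLogs fetches the last n lines of a systemd unit's journal,
// mirroring the `sudo journalctl -u <unit> -n 400` commands above.
func runUnitLogs(unit, lines string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u "+unit+" -n "+lines).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		out, err := runUnitLogs(unit, "400")
		if err != nil {
			fmt.Printf("failed to gather %s logs: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== %s logs: %d bytes ===\n", unit, len(out))
	}
}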
	I1216 02:59:14.910970 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:14.924577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:14.924643 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:14.953399 1848358 cri.go:89] found id: ""
	I1216 02:59:14.953413 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.953420 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:14.953432 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:14.953495 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:14.978792 1848358 cri.go:89] found id: ""
	I1216 02:59:14.978806 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.978815 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:14.978821 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:14.978880 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:15.008511 1848358 cri.go:89] found id: ""
	I1216 02:59:15.008528 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.008536 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:15.008542 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:15.008624 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:15.053197 1848358 cri.go:89] found id: ""
	I1216 02:59:15.053213 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.053220 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:15.053226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:15.053293 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:15.082542 1848358 cri.go:89] found id: ""
	I1216 02:59:15.082557 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.082564 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:15.082570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:15.082634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:15.109527 1848358 cri.go:89] found id: ""
	I1216 02:59:15.109542 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.109550 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:15.109556 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:15.109634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:15.137809 1848358 cri.go:89] found id: ""
	I1216 02:59:15.137823 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.137830 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:15.137838 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:15.137849 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:15.211501 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:15.202592   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.203475   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205236   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205549   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.207125   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:15.202592   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.203475   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205236   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205549   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.207125   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:15.211511 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:15.211523 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:15.285555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:15.285576 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:15.314442 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:15.314458 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:15.370796 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:15.370818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:17.889239 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:17.899171 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:17.899236 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:17.924099 1848358 cri.go:89] found id: ""
	I1216 02:59:17.924113 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.924121 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:17.924126 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:17.924187 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:17.950817 1848358 cri.go:89] found id: ""
	I1216 02:59:17.950832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.950838 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:17.950843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:17.950903 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:17.976899 1848358 cri.go:89] found id: ""
	I1216 02:59:17.976913 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.976920 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:17.976925 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:17.976987 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:18.003139 1848358 cri.go:89] found id: ""
	I1216 02:59:18.003156 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.003164 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:18.003169 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:18.003244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:18.032644 1848358 cri.go:89] found id: ""
	I1216 02:59:18.032659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.032666 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:18.032671 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:18.032740 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:18.058880 1848358 cri.go:89] found id: ""
	I1216 02:59:18.058895 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.058906 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:18.058915 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:18.058988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:18.084275 1848358 cri.go:89] found id: ""
	I1216 02:59:18.084290 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.084298 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:18.084306 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:18.084318 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:18.146637 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:18.146665 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:18.164002 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:18.164022 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:18.241086 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:18.231204   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.232053   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.233635   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.234184   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.235838   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:18.231204   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.232053   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.233635   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.234184   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.235838   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:18.241097 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:18.241110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:18.306777 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:18.306796 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
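
The "container status" command above encodes a runtime fallback: use crictl if "which crictl" finds it, otherwise fall back to "docker ps -a". A rough Go equivalent of that fallback logic, under the same assumption that at least one runtime CLI is installed; this is an interpretation of the shell one-liner, not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when it is on PATH, as `which crictl` does in
	// the shell command above.
	cmd := exec.Command("sudo", "crictl", "ps", "-a")
	if _, err := exec.LookPath("crictl"); err != nil {
		cmd = exec.Command("sudo", "docker", "ps", "-a")
	}
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Mirror the shell's `|| sudo docker ps -a`: if the first
		// attempt fails outright, try docker before giving up.
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("no container runtime responded:", err)
			return
		}
	}
	fmt.Print(string(out))
}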
	I1216 02:59:20.840754 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:20.850885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:20.850942 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:20.880985 1848358 cri.go:89] found id: ""
	I1216 02:59:20.881000 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.881007 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:20.881012 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:20.881071 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:20.904789 1848358 cri.go:89] found id: ""
	I1216 02:59:20.904803 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.904810 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:20.904815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:20.904873 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:20.929350 1848358 cri.go:89] found id: ""
	I1216 02:59:20.929362 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.929370 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:20.929381 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:20.929438 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:20.953473 1848358 cri.go:89] found id: ""
	I1216 02:59:20.953487 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.953493 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:20.953499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:20.953558 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:20.977718 1848358 cri.go:89] found id: ""
	I1216 02:59:20.977731 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.977738 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:20.977743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:20.977800 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:21.001640 1848358 cri.go:89] found id: ""
	I1216 02:59:21.001657 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.001664 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:21.001669 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:21.001752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:21.030827 1848358 cri.go:89] found id: ""
	I1216 02:59:21.030840 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.030847 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:21.030855 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:21.030865 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:21.086683 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:21.086703 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:21.106615 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:21.106638 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:21.196393 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:21.180051   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.181461   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.182376   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186322   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186918   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:21.180051   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.181461   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.182376   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186322   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186918   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:21.196410 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:21.196420 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:21.259711 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:21.259730 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:23.788985 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:23.801081 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:23.801153 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:23.831711 1848358 cri.go:89] found id: ""
	I1216 02:59:23.831732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.831740 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:23.831745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:23.831812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:23.857025 1848358 cri.go:89] found id: ""
	I1216 02:59:23.857040 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.857047 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:23.857052 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:23.857115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:23.885653 1848358 cri.go:89] found id: ""
	I1216 02:59:23.885667 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.885674 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:23.885679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:23.885739 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:23.912974 1848358 cri.go:89] found id: ""
	I1216 02:59:23.912987 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.912996 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:23.913001 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:23.913062 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:23.936892 1848358 cri.go:89] found id: ""
	I1216 02:59:23.936906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.936914 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:23.936919 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:23.936978 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:23.959826 1848358 cri.go:89] found id: ""
	I1216 02:59:23.959841 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.959848 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:23.959853 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:23.959912 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:23.987747 1848358 cri.go:89] found id: ""
	I1216 02:59:23.987760 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.987767 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:23.987775 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:23.987785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:24.043435 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:24.043453 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:24.060830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:24.060848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:24.129870 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:24.121071   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.121882   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.123511   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.124023   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.125643   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:24.121071   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.121882   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.123511   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.124023   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.125643   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:24.129882 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:24.129893 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:24.192043 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:24.192064 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:26.722933 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:26.733462 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:26.733528 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:26.757094 1848358 cri.go:89] found id: ""
	I1216 02:59:26.757109 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.757115 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:26.757121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:26.757190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:26.785265 1848358 cri.go:89] found id: ""
	I1216 02:59:26.785279 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.785286 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:26.785291 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:26.785348 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:26.809734 1848358 cri.go:89] found id: ""
	I1216 02:59:26.809748 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.809755 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:26.809760 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:26.809823 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:26.833900 1848358 cri.go:89] found id: ""
	I1216 02:59:26.833914 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.833921 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:26.833926 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:26.833983 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:26.858364 1848358 cri.go:89] found id: ""
	I1216 02:59:26.858381 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.858388 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:26.858392 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:26.858476 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:26.884221 1848358 cri.go:89] found id: ""
	I1216 02:59:26.884235 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.884242 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:26.884247 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:26.884306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:26.909747 1848358 cri.go:89] found id: ""
	I1216 02:59:26.909761 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.909768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:26.909776 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:26.909785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:26.965217 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:26.965237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:26.982549 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:26.982573 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:27.049273 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:27.041323   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.041704   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043346   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043928   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.045499   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:27.041323   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.041704   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043346   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043928   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.045499   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:27.049282 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:27.049293 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:27.112656 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:27.112677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:29.642709 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:29.652965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:29.653051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:29.681994 1848358 cri.go:89] found id: ""
	I1216 02:59:29.682008 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.682030 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:29.682037 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:29.682106 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:29.710335 1848358 cri.go:89] found id: ""
	I1216 02:59:29.710350 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.710357 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:29.710363 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:29.710454 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:29.737846 1848358 cri.go:89] found id: ""
	I1216 02:59:29.737861 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.737868 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:29.737873 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:29.737943 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:29.763917 1848358 cri.go:89] found id: ""
	I1216 02:59:29.763931 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.763938 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:29.763944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:29.764015 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:29.788324 1848358 cri.go:89] found id: ""
	I1216 02:59:29.788338 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.788345 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:29.788351 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:29.788409 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:29.812477 1848358 cri.go:89] found id: ""
	I1216 02:59:29.812490 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.812497 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:29.812502 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:29.812561 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:29.840464 1848358 cri.go:89] found id: ""
	I1216 02:59:29.840479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.840486 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:29.840495 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:29.840509 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:29.905495 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:29.897197   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.898006   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899547   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899856   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.901335   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:29.905505 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:29.905515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:29.967090 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:29.967110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:29.999894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:29.999910 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:30.095570 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:30.095596 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
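
The pass above (pgrep for the apiserver, a crictl lookup per control-plane component, then kubelet, dmesg, describe-nodes, containerd, and container-status gathering) repeats below roughly every three seconds until the apiserver appears or the start times out. A minimal local sketch of that poll-and-collect loop, with hypothetical helper names and plain os/exec in place of minikube's ssh_runner:

// pollapiserver.go: a minimal sketch of the poll loop visible in this log.
// Every ~3s it checks for a kube-apiserver process and, while none exists,
// gathers diagnostics. Illustrative only; not minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log: pgrep exits non-zero
// when no process matches, so Run() returns an error in that case.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// collectDiagnostics stands in for the "Gathering logs for ..." steps above.
func collectDiagnostics() {
	out, _ := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").CombinedOutput()
	fmt.Printf("kubelet journal: %d bytes\n", len(out))
}

func main() {
	ticker := time.NewTicker(3 * time.Second) // roughly the cadence of the timestamps above
	defer ticker.Stop()
	for range ticker.C {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		collectDiagnostics()
	}
}
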
	I1216 02:59:32.614024 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:32.624941 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:32.625007 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:32.649578 1848358 cri.go:89] found id: ""
	I1216 02:59:32.649593 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.649601 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:32.649606 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:32.649665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:32.678365 1848358 cri.go:89] found id: ""
	I1216 02:59:32.678379 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.678386 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:32.678391 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:32.678450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:32.703205 1848358 cri.go:89] found id: ""
	I1216 02:59:32.703219 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.703226 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:32.703231 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:32.703295 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:32.727484 1848358 cri.go:89] found id: ""
	I1216 02:59:32.727499 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.727506 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:32.727511 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:32.727568 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:32.753092 1848358 cri.go:89] found id: ""
	I1216 02:59:32.753106 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.753113 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:32.753119 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:32.753178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:32.781551 1848358 cri.go:89] found id: ""
	I1216 02:59:32.781565 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.781572 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:32.781577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:32.781636 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:32.807153 1848358 cri.go:89] found id: ""
	I1216 02:59:32.807168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.807176 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:32.807184 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:32.807199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:32.863763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:32.863782 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:32.880478 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:32.880495 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:32.950082 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:32.941362   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.942084   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.943575   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.944217   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.945946   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:32.950092 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:32.950102 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:33.016099 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:33.016121 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
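
Each `found id: ""` pair above is cri.go asking the runtime for containers by component name; `crictl ps -a --quiet` prints one container ID per line, so empty output produces the W-level "No container was found matching" lines. A sketch of that lookup, assuming crictl is on the local PATH and sudo is available (listContainers is an illustrative name, not minikube's API):

// listcri.go: a sketch of the per-component lookup behind the cri.go lines.
// `crictl ps -a --quiet --name=<component>` prints one container ID per line;
// no output means no matching container. Illustrative, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers is a hypothetical helper wrapping the crictl call from the log.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err // e.g. the runtime socket is unreachable
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c) // the W-level lines above
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}

Note that crictl also exits non-zero when it cannot reach the runtime socket, which callers typically want to distinguish from a genuinely empty listing.
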
	I1216 02:59:35.546066 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:35.557055 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:35.557115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:35.582927 1848358 cri.go:89] found id: ""
	I1216 02:59:35.582951 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.582960 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:35.582965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:35.583033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:35.608110 1848358 cri.go:89] found id: ""
	I1216 02:59:35.608124 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.608131 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:35.608141 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:35.608203 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:35.632465 1848358 cri.go:89] found id: ""
	I1216 02:59:35.632479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.632485 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:35.632490 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:35.632555 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:35.661165 1848358 cri.go:89] found id: ""
	I1216 02:59:35.661179 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.661198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:35.661204 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:35.661272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:35.686050 1848358 cri.go:89] found id: ""
	I1216 02:59:35.686064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.686081 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:35.686087 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:35.686156 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:35.711189 1848358 cri.go:89] found id: ""
	I1216 02:59:35.711203 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.711210 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:35.711215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:35.711276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:35.735024 1848358 cri.go:89] found id: ""
	I1216 02:59:35.735072 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.735080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:35.735089 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:35.735099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:35.790017 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:35.790036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:35.807195 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:35.807212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:35.870014 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:35.862369   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.862875   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864373   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864766   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.866205   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:35.870024 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:35.870036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:35.933113 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:35.933134 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
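
The "container status" step above encodes a fallback chain in bash: use crictl if `which crictl` finds it, otherwise fall back to `docker ps -a`. The same shape expressed in Go, illustrative only:

// fallbackstatus.go: sketch of the fallback in the container-status one-liner
// (crictl if present and working, else docker). Not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return out, nil
	}
	// crictl missing or failing: fall back to docker, like the `||` in the log.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container listing available:", err)
		return
	}
	fmt.Print(string(out))
}
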
	I1216 02:59:38.460684 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:38.471131 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:38.471193 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:38.508160 1848358 cri.go:89] found id: ""
	I1216 02:59:38.508175 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.508183 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:38.508188 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:38.508257 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:38.540297 1848358 cri.go:89] found id: ""
	I1216 02:59:38.540312 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.540320 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:38.540324 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:38.540388 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:38.566230 1848358 cri.go:89] found id: ""
	I1216 02:59:38.566244 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.566252 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:38.566257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:38.566321 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:38.591818 1848358 cri.go:89] found id: ""
	I1216 02:59:38.591832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.591839 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:38.591844 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:38.591911 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:38.618603 1848358 cri.go:89] found id: ""
	I1216 02:59:38.618617 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.618624 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:38.618629 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:38.618689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:38.643310 1848358 cri.go:89] found id: ""
	I1216 02:59:38.643324 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.643331 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:38.643337 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:38.643402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:38.667065 1848358 cri.go:89] found id: ""
	I1216 02:59:38.667080 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.667087 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:38.667095 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:38.667106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:38.699522 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:38.699540 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:38.757880 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:38.757898 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:38.774888 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:38.774903 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:38.842015 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:38.834115   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.834681   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836207   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836756   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.838251   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:38.842025 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:38.842036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:41.405157 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:41.416379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:41.416447 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:41.446560 1848358 cri.go:89] found id: ""
	I1216 02:59:41.446578 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.446596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:41.446602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:41.446675 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:41.483188 1848358 cri.go:89] found id: ""
	I1216 02:59:41.483202 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.483209 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:41.483213 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:41.483274 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:41.516110 1848358 cri.go:89] found id: ""
	I1216 02:59:41.516140 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.516147 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:41.516152 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:41.516218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:41.540839 1848358 cri.go:89] found id: ""
	I1216 02:59:41.540853 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.540860 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:41.540866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:41.540926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:41.566596 1848358 cri.go:89] found id: ""
	I1216 02:59:41.566622 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.566629 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:41.566634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:41.566706 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:41.590702 1848358 cri.go:89] found id: ""
	I1216 02:59:41.590717 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.590724 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:41.590729 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:41.590791 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:41.616252 1848358 cri.go:89] found id: ""
	I1216 02:59:41.616276 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.616283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:41.616291 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:41.616303 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:41.645509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:41.645525 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:41.704141 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:41.704159 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:41.721706 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:41.721725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:41.783974 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:41.776246   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.776782   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778294   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778717   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.780191   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:41.783984 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:41.784019 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
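
When describe-nodes fails, logs.go:130 records the command, its exit status, and the captured stderr, as in the blocks above. A sketch of a runner that reports a failure the same way, using exec.ExitError to recover the status code (run is a hypothetical helper, not minikube's ssh_runner):

// runcapture.go: sketch of reporting a failed command with separate
// stdout/stderr plus the numeric exit status, as logs.go does above.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func run(cmdline string) (stdout, stderr string, code int, err error) {
	cmd := exec.Command("/bin/bash", "-c", cmdline)
	var o, e bytes.Buffer
	cmd.Stdout, cmd.Stderr = &o, &e
	err = cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return o.String(), e.String(), code, err
}

func main() {
	// While the apiserver is down, kubectl exits 1, as in the blocks above.
	out, errOut, code, _ := run("kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	fmt.Printf("exit status %d\nstdout:\n%s\nstderr:\n%s\n", code, out, errOut)
}
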
	I1216 02:59:44.346692 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:44.357118 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:44.357181 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:44.382575 1848358 cri.go:89] found id: ""
	I1216 02:59:44.382589 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.382596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:44.382601 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:44.382666 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:44.407349 1848358 cri.go:89] found id: ""
	I1216 02:59:44.407363 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.407370 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:44.407375 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:44.407442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:44.438660 1848358 cri.go:89] found id: ""
	I1216 02:59:44.438674 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.438681 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:44.438693 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:44.438748 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:44.483154 1848358 cri.go:89] found id: ""
	I1216 02:59:44.483168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.483175 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:44.483180 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:44.483239 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:44.512253 1848358 cri.go:89] found id: ""
	I1216 02:59:44.512267 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.512274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:44.512283 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:44.512341 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:44.537396 1848358 cri.go:89] found id: ""
	I1216 02:59:44.537410 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.537427 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:44.537434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:44.537510 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:44.562261 1848358 cri.go:89] found id: ""
	I1216 02:59:44.562275 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.562283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:44.562291 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:44.562300 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:44.630850 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:44.630877 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:44.660268 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:44.660294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:44.721274 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:44.721294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:44.738464 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:44.738482 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:44.804552 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:44.796057   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.796830   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798532   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798943   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.800543   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:47.304816 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:47.315117 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:47.315178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:47.344292 1848358 cri.go:89] found id: ""
	I1216 02:59:47.344306 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.344314 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:47.344319 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:47.344381 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:47.367920 1848358 cri.go:89] found id: ""
	I1216 02:59:47.367934 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.367942 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:47.367947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:47.368017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:47.392383 1848358 cri.go:89] found id: ""
	I1216 02:59:47.392397 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.392404 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:47.392409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:47.392473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:47.415620 1848358 cri.go:89] found id: ""
	I1216 02:59:47.415634 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.415641 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:47.415646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:47.415703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:47.454281 1848358 cri.go:89] found id: ""
	I1216 02:59:47.454295 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.454302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:47.454308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:47.454367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:47.487808 1848358 cri.go:89] found id: ""
	I1216 02:59:47.487822 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.487829 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:47.487834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:47.487893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:47.515510 1848358 cri.go:89] found id: ""
	I1216 02:59:47.515523 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.515531 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:47.515538 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:47.515551 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:47.582935 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:47.574325   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.575137   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.576881   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.577372   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.578856   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:47.582951 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:47.582963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:47.644716 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:47.644735 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:47.673055 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:47.673071 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:47.729448 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:47.729467 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
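
Every kubectl attempt in this section dies on `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port at all, which is consistent with the empty kube-apiserver container lookups. A bare TCP probe reproduces that diagnosis without involving kubectl (sketch; 8441 is the apiserver port this functional profile was started with):

// probeport.go: a minimal dial against the apiserver port to reproduce the
// failure mode in the kubectl stderr above. Illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Prints a "connection refused" error while nothing listens on 8441.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
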
	I1216 02:59:50.247207 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:50.257829 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:50.257894 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:50.282406 1848358 cri.go:89] found id: ""
	I1216 02:59:50.282422 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.282429 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:50.282435 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:50.282497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:50.307428 1848358 cri.go:89] found id: ""
	I1216 02:59:50.307442 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.307450 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:50.307455 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:50.307514 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:50.332093 1848358 cri.go:89] found id: ""
	I1216 02:59:50.332107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.332114 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:50.332120 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:50.332179 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:50.357137 1848358 cri.go:89] found id: ""
	I1216 02:59:50.357151 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.357158 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:50.357163 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:50.357227 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:50.380923 1848358 cri.go:89] found id: ""
	I1216 02:59:50.380938 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.380945 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:50.380950 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:50.381008 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:50.404673 1848358 cri.go:89] found id: ""
	I1216 02:59:50.404687 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.404695 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:50.404700 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:50.404762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:50.428594 1848358 cri.go:89] found id: ""
	I1216 02:59:50.428609 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.428616 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:50.428624 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:50.428634 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:50.511977 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:50.503194   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.503744   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505371   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505827   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.507476   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:50.511987 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:50.511998 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:50.575372 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:50.575394 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:50.603193 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:50.603215 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:50.660351 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:50.660370 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:53.177329 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:53.187812 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:53.187876 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:53.212765 1848358 cri.go:89] found id: ""
	I1216 02:59:53.212780 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.212787 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:53.212792 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:53.212855 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:53.237571 1848358 cri.go:89] found id: ""
	I1216 02:59:53.237584 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.237591 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:53.237596 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:53.237657 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:53.261989 1848358 cri.go:89] found id: ""
	I1216 02:59:53.262003 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.262010 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:53.262015 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:53.262077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:53.291843 1848358 cri.go:89] found id: ""
	I1216 02:59:53.291857 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.291864 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:53.291869 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:53.291929 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:53.316569 1848358 cri.go:89] found id: ""
	I1216 02:59:53.316583 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.316590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:53.316595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:53.316655 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:53.340200 1848358 cri.go:89] found id: ""
	I1216 02:59:53.340214 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.340221 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:53.340226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:53.340284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:53.364767 1848358 cri.go:89] found id: ""
	I1216 02:59:53.364782 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.364789 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:53.364796 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:53.364806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:53.423540 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:53.423559 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:53.440975 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:53.440990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:53.518181 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:53.509741   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.510408   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512145   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512724   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.514366   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:53.518190 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:53.518201 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:53.580231 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:53.580250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
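Editor's note: the block above is one complete pass of minikube's apiserver health loop. Each pass probes for a kube-apiserver process with pgrep, asks the CRI runtime for containers named kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet, and, since every query comes back empty, falls through to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal Go sketch of the per-component container check follows, assuming crictl and passwordless sudo are available on the node; the helper name containerIDs is illustrative, not minikube's own.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same query the log shows for each component:
    //   sudo crictl ps -a --quiet --name=<name>
    // and returns the container IDs crictl printed, one per line.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            ids, err := containerIDs(name)
            if err != nil {
                fmt.Printf("query for %q failed: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                // Corresponds to the log's: No container was found matching "<name>"
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }

An empty result from containerIDs is what produces the paired `found id: ""` / `0 containers: []` lines throughout this transcript.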
	I1216 02:59:56.109099 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:56.119430 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:56.119493 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:56.144050 1848358 cri.go:89] found id: ""
	I1216 02:59:56.144064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.144072 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:56.144077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:56.144137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:56.168768 1848358 cri.go:89] found id: ""
	I1216 02:59:56.168783 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.168790 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:56.168794 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:56.168858 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:56.193611 1848358 cri.go:89] found id: ""
	I1216 02:59:56.193625 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.193633 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:56.193637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:56.193694 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:56.218383 1848358 cri.go:89] found id: ""
	I1216 02:59:56.218396 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.218415 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:56.218420 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:56.218532 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:56.244850 1848358 cri.go:89] found id: ""
	I1216 02:59:56.244864 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.244871 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:56.244888 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:56.244960 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:56.272142 1848358 cri.go:89] found id: ""
	I1216 02:59:56.272167 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.272174 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:56.272181 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:56.272252 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:56.296464 1848358 cri.go:89] found id: ""
	I1216 02:59:56.296478 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.296485 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:56.296493 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:56.296503 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:56.351797 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:56.351818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:56.368635 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:56.368655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:56.433327 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:56.425076   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.425853   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.427469   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.428121   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.429570   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:56.425076   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.425853   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.427469   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.428121   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.429570   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:56.433336 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:56.433346 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:56.509361 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:56.509380 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
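The describe-nodes failures repeated above all reduce to the same condition: nothing is listening on the apiserver port, so every kubectl request to https://localhost:8441 fails with `connect: connection refused`. Below is a minimal sketch of that reachability check, assuming it runs on the node itself; the port number comes from the --apiserver-port=8441 flag this test starts with, and everything else is illustrative.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 8441 is the --apiserver-port this functional test starts minikube with.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // With no kube-apiserver container running, this reports the same
            // "connect: connection refused" seen in every kubectl stderr above.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }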
	I1216 02:59:59.037187 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:59.047286 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:59.047351 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:59.072817 1848358 cri.go:89] found id: ""
	I1216 02:59:59.072831 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.072838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:59.072843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:59.072914 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:59.098681 1848358 cri.go:89] found id: ""
	I1216 02:59:59.098696 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.098708 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:59.098713 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:59.098774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:59.124932 1848358 cri.go:89] found id: ""
	I1216 02:59:59.124945 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.124953 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:59.124958 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:59.125017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:59.149561 1848358 cri.go:89] found id: ""
	I1216 02:59:59.149575 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.149581 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:59.149586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:59.149646 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:59.174402 1848358 cri.go:89] found id: ""
	I1216 02:59:59.174417 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.174426 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:59.174431 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:59.174497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:59.199717 1848358 cri.go:89] found id: ""
	I1216 02:59:59.199732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.199740 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:59.199745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:59.199812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:59.225754 1848358 cri.go:89] found id: ""
	I1216 02:59:59.225768 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.225787 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:59.225795 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:59.225806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:59.288033 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:59.288058 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:59.316114 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:59.316130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:59.373962 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:59.373981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:59.390958 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:59.390978 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:59.466112 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:59.455145   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.456493   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458087   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458370   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.462047   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:59.455145   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.456493   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458087   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458370   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.462047   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:01.968417 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:01.996618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:01.996689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:02.075342 1848358 cri.go:89] found id: ""
	I1216 03:00:02.075366 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.075373 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:02.075379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:02.075457 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:02.107614 1848358 cri.go:89] found id: ""
	I1216 03:00:02.107629 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.107637 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:02.107646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:02.107720 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:02.137752 1848358 cri.go:89] found id: ""
	I1216 03:00:02.137768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.137776 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:02.137782 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:02.137853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:02.169435 1848358 cri.go:89] found id: ""
	I1216 03:00:02.169452 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.169459 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:02.169465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:02.169546 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:02.198391 1848358 cri.go:89] found id: ""
	I1216 03:00:02.198423 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.198431 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:02.198438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:02.198511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:02.227862 1848358 cri.go:89] found id: ""
	I1216 03:00:02.227877 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.227885 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:02.227891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:02.227959 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:02.256236 1848358 cri.go:89] found id: ""
	I1216 03:00:02.256251 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.256269 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:02.256278 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:02.256290 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:02.315559 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:02.315582 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:02.334230 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:02.334248 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:02.404903 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:02.395443   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.396222   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.398711   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.399382   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.400828   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:02.395443   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.396222   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.398711   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.399382   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.400828   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:02.404912 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:02.404923 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:02.469074 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:02.469095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
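The "container status" step in each pass uses a shell fallback: `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` first resolves crictl (keeping the bare name if `which` finds nothing) and, if that listing fails outright, retries with docker. A hedged Go equivalent of that try-then-fall-back shape is sketched below; the function name containerStatus is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the shell fallback in the log:
    // try a crictl listing first, then fall back to docker if crictl fails.
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker listings failed:", err)
            return
        }
        fmt.Print(out)
    }

The fallback matters on nodes where the CRI is containerd (as here) but crictl is missing or broken; docker is the last resort, which is why it only runs when the first command fails.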
	I1216 03:00:05.003993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:05.018300 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:05.018420 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:05.047301 1848358 cri.go:89] found id: ""
	I1216 03:00:05.047316 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.047323 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:05.047335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:05.047400 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:05.072682 1848358 cri.go:89] found id: ""
	I1216 03:00:05.072697 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.072704 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:05.072709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:05.072770 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:05.102478 1848358 cri.go:89] found id: ""
	I1216 03:00:05.102493 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.102502 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:05.102507 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:05.102578 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:05.132728 1848358 cri.go:89] found id: ""
	I1216 03:00:05.132743 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.132750 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:05.132756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:05.132825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:05.158706 1848358 cri.go:89] found id: ""
	I1216 03:00:05.158721 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.158728 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:05.158733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:05.158795 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:05.184666 1848358 cri.go:89] found id: ""
	I1216 03:00:05.184681 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.184688 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:05.184694 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:05.184756 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:05.216197 1848358 cri.go:89] found id: ""
	I1216 03:00:05.216213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.216221 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:05.216229 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:05.216239 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:05.278419 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:05.278439 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:05.309753 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:05.309771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:05.366862 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:05.366880 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:05.384427 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:05.384446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:05.452157 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:05.443910   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.444698   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446307   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446727   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.448188   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:05.443910   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.444698   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446307   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446727   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.448188   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:07.952402 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:07.967145 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:07.967225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:07.998164 1848358 cri.go:89] found id: ""
	I1216 03:00:07.998178 1848358 logs.go:282] 0 containers: []
	W1216 03:00:07.998185 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:07.998191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:07.998251 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:08.032873 1848358 cri.go:89] found id: ""
	I1216 03:00:08.032889 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.032896 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:08.032901 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:08.032964 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:08.059832 1848358 cri.go:89] found id: ""
	I1216 03:00:08.059846 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.059854 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:08.059859 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:08.059933 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:08.087232 1848358 cri.go:89] found id: ""
	I1216 03:00:08.087246 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.087253 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:08.087258 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:08.087316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:08.114253 1848358 cri.go:89] found id: ""
	I1216 03:00:08.114267 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.114274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:08.114280 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:08.114343 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:08.139972 1848358 cri.go:89] found id: ""
	I1216 03:00:08.139987 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.139994 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:08.139999 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:08.140141 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:08.165613 1848358 cri.go:89] found id: ""
	I1216 03:00:08.165628 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.165637 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:08.165645 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:08.165655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:08.221696 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:08.221715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:08.240189 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:08.240206 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:08.320945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:08.311750   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.312401   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314217   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314799   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.316399   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:08.311750   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.312401   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314217   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314799   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.316399   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:08.320954 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:08.320964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:08.384243 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:08.384275 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:10.913864 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:10.926998 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:10.927108 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:10.962440 1848358 cri.go:89] found id: ""
	I1216 03:00:10.962454 1848358 logs.go:282] 0 containers: []
	W1216 03:00:10.962461 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:10.962466 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:10.962526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:11.004569 1848358 cri.go:89] found id: ""
	I1216 03:00:11.004589 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.004598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:11.004610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:11.005096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:11.034401 1848358 cri.go:89] found id: ""
	I1216 03:00:11.034415 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.034429 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:11.034434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:11.034508 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:11.065292 1848358 cri.go:89] found id: ""
	I1216 03:00:11.065309 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.065317 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:11.065325 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:11.065394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:11.092043 1848358 cri.go:89] found id: ""
	I1216 03:00:11.092057 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.092065 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:11.092070 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:11.092163 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:11.121914 1848358 cri.go:89] found id: ""
	I1216 03:00:11.121929 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.121936 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:11.121942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:11.122014 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:11.147863 1848358 cri.go:89] found id: ""
	I1216 03:00:11.147879 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.147886 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:11.147894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:11.147906 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:11.213267 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:11.213287 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:11.231545 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:11.231561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:11.303516 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:11.294839   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.295358   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.296660   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.297169   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.298964   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:11.294839   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.295358   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.296660   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.297169   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.298964   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:11.303525 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:11.303544 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:11.375152 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:11.375181 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:13.905997 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:13.916685 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:13.916754 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:13.946670 1848358 cri.go:89] found id: ""
	I1216 03:00:13.946698 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.946705 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:13.946711 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:13.946782 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:13.978544 1848358 cri.go:89] found id: ""
	I1216 03:00:13.978558 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.978565 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:13.978570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:13.978630 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:14.010045 1848358 cri.go:89] found id: ""
	I1216 03:00:14.010060 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.010068 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:14.010073 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:14.010148 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:14.039695 1848358 cri.go:89] found id: ""
	I1216 03:00:14.039709 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.039717 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:14.039722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:14.039786 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:14.065918 1848358 cri.go:89] found id: ""
	I1216 03:00:14.065932 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.065939 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:14.065944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:14.066002 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:14.092594 1848358 cri.go:89] found id: ""
	I1216 03:00:14.092607 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.092615 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:14.092620 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:14.092684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:14.117022 1848358 cri.go:89] found id: ""
	I1216 03:00:14.117036 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.117043 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:14.117052 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:14.117063 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:14.145392 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:14.145409 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:14.201319 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:14.201338 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:14.218382 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:14.218397 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:14.286945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:14.279281   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.279802   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281416   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281996   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.283003   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:14.279281   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.279802   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281416   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281996   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.283003   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:14.286956 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:14.286968 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
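Note the cadence: the pgrep probes land roughly three seconds apart (02:59:53, :56, :59, 03:00:02, and so on), so this is a bounded poll rather than a hang. A minimal sketch of such a loop follows; only the pgrep pattern is taken from the log, and the deadline and sleep interval are illustrative assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Illustrative deadline; the real test polled far longer before failing.
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            // Same probe as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
            // pgrep exits 0 only when a matching process exists.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process is up")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s spacing of the probes above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }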
	I1216 03:00:16.848830 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:16.859224 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:16.859288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:16.900559 1848358 cri.go:89] found id: ""
	I1216 03:00:16.900573 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.900580 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:16.900586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:16.900660 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:16.925198 1848358 cri.go:89] found id: ""
	I1216 03:00:16.925213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.925221 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:16.925226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:16.925288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:16.968532 1848358 cri.go:89] found id: ""
	I1216 03:00:16.968545 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.968552 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:16.968557 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:16.968620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:17.001327 1848358 cri.go:89] found id: ""
	I1216 03:00:17.001343 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.001351 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:17.001357 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:17.001427 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:17.029828 1848358 cri.go:89] found id: ""
	I1216 03:00:17.029843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.029850 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:17.029855 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:17.029917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:17.055865 1848358 cri.go:89] found id: ""
	I1216 03:00:17.055880 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.055887 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:17.055892 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:17.055956 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:17.081782 1848358 cri.go:89] found id: ""
	I1216 03:00:17.081796 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.081804 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:17.081812 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:17.081823 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:17.137664 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:17.137684 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:17.155387 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:17.155413 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:17.223693 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:17.215814   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.216359   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.217875   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.218284   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.219788   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:17.215814   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.216359   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.217875   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.218284   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.219788   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:17.223704 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:17.223715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:17.285895 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:17.285915 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
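This "container status" command is the runtime-agnostic fallback minikube uses for log gathering: try crictl first (by absolute path when `which` resolves it) and fall back to `docker ps -a` if that fails. The per-component probes above follow the same pattern; a sketch of that loop, run on the node and assuming crictl is installed, is:

    # List containers for each control-plane component, as the gathering loop does
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done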
	I1216 03:00:19.819792 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:19.830531 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:19.830595 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:19.855374 1848358 cri.go:89] found id: ""
	I1216 03:00:19.855388 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.855395 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:19.855400 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:19.855459 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:19.880613 1848358 cri.go:89] found id: ""
	I1216 03:00:19.880627 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.880634 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:19.880639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:19.880701 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:19.905217 1848358 cri.go:89] found id: ""
	I1216 03:00:19.905231 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.905238 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:19.905243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:19.905306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:19.938230 1848358 cri.go:89] found id: ""
	I1216 03:00:19.938245 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.938252 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:19.938257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:19.938318 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:19.972308 1848358 cri.go:89] found id: ""
	I1216 03:00:19.972322 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.972330 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:19.972335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:19.972396 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:20.009826 1848358 cri.go:89] found id: ""
	I1216 03:00:20.009843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.009851 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:20.009857 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:20.009931 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:20.047016 1848358 cri.go:89] found id: ""
	I1216 03:00:20.047031 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.047075 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:20.047084 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:20.047095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:20.105420 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:20.105444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:20.123806 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:20.123824 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:20.193387 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:20.184716   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.185705   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.187519   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.188104   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.189189   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:20.184716   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.185705   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.187519   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.188104   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.189189   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:20.193399 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:20.193410 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:20.256212 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:20.256232 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:22.788953 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:22.799143 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:22.799205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:22.824912 1848358 cri.go:89] found id: ""
	I1216 03:00:22.824926 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.824933 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:22.824938 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:22.824999 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:22.848993 1848358 cri.go:89] found id: ""
	I1216 03:00:22.849007 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.849014 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:22.849019 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:22.849077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:22.873445 1848358 cri.go:89] found id: ""
	I1216 03:00:22.873467 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.873476 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:22.873481 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:22.873548 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:22.898928 1848358 cri.go:89] found id: ""
	I1216 03:00:22.898952 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.898960 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:22.898965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:22.899088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:22.924441 1848358 cri.go:89] found id: ""
	I1216 03:00:22.924455 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.924462 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:22.924471 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:22.924536 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:22.972165 1848358 cri.go:89] found id: ""
	I1216 03:00:22.972187 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.972194 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:22.972200 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:22.972272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:23.007998 1848358 cri.go:89] found id: ""
	I1216 03:00:23.008014 1848358 logs.go:282] 0 containers: []
	W1216 03:00:23.008021 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:23.008030 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:23.008041 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:23.074846 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:23.065592   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.066370   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.068447   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.069048   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.070772   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:23.065592   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.066370   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.068447   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.069048   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.070772   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:23.074856 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:23.074867 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:23.141968 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:23.141990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:23.170755 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:23.170772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:23.229156 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:23.229176 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:25.746547 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:25.757092 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:25.757177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:25.781744 1848358 cri.go:89] found id: ""
	I1216 03:00:25.781758 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.781765 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:25.781770 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:25.781829 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:25.810185 1848358 cri.go:89] found id: ""
	I1216 03:00:25.810200 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.810207 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:25.810212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:25.810273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:25.837797 1848358 cri.go:89] found id: ""
	I1216 03:00:25.837810 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.837818 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:25.837822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:25.837881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:25.864444 1848358 cri.go:89] found id: ""
	I1216 03:00:25.864466 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.864474 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:25.864479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:25.864537 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:25.889170 1848358 cri.go:89] found id: ""
	I1216 03:00:25.889185 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.889192 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:25.889197 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:25.889253 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:25.913381 1848358 cri.go:89] found id: ""
	I1216 03:00:25.913396 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.913403 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:25.913409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:25.913468 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:25.956168 1848358 cri.go:89] found id: ""
	I1216 03:00:25.956184 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.956191 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:25.956199 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:25.956209 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:25.987017 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:25.987032 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:26.056762 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:26.056783 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:26.074582 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:26.074599 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:26.142533 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:26.133438   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.134117   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.135700   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.136346   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.138045   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:26.133438   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.134117   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.135700   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.136346   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.138045   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:26.142543 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:26.142554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:28.704757 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:28.715093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:28.715171 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:28.756309 1848358 cri.go:89] found id: ""
	I1216 03:00:28.756339 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.756350 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:28.756355 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:28.756442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:28.786013 1848358 cri.go:89] found id: ""
	I1216 03:00:28.786027 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.786033 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:28.786038 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:28.786099 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:28.813243 1848358 cri.go:89] found id: ""
	I1216 03:00:28.813257 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.813264 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:28.813269 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:28.813329 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:28.837627 1848358 cri.go:89] found id: ""
	I1216 03:00:28.837642 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.837649 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:28.837654 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:28.837714 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:28.862744 1848358 cri.go:89] found id: ""
	I1216 03:00:28.862768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.862775 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:28.862780 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:28.862850 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:28.888763 1848358 cri.go:89] found id: ""
	I1216 03:00:28.888777 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.888784 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:28.888790 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:28.888851 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:28.913212 1848358 cri.go:89] found id: ""
	I1216 03:00:28.913226 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.913234 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:28.913242 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:28.913252 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:28.973937 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:28.973957 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:28.995906 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:28.995924 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:29.068971 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:29.060478   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.060883   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062455   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062780   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.064407   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:29.060478   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.060883   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062455   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062780   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.064407   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:29.068980 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:29.068994 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:29.132688 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:29.132707 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:31.666915 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:31.677125 1848358 kubeadm.go:602] duration metric: took 4m1.758576282s to restartPrimaryControlPlane
	W1216 03:00:31.677186 1848358 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:00:31.677266 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:00:32.091488 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:00:32.105369 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:00:32.113490 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:00:32.113550 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:00:32.122054 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:00:32.122064 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:00:32.122120 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:00:32.130622 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:00:32.130682 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:00:32.138437 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:00:32.146797 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:00:32.146863 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:00:32.155178 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.163734 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:00:32.163795 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.171993 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:00:32.180028 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:00:32.180097 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
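With none of the four kubeconfigs present, each grep exits with status 2 and the follow-up rm -f is a harmless no-op; the cleanup only matters when a stale file points at a different control-plane endpoint. A condensed sketch of the same check-and-remove pass, using the endpoint from this run, would be:

    # Drop any kubeconfig that does not reference the expected endpoint
    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done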
	I1216 03:00:32.188091 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:00:32.228785 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:00:32.228977 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:00:32.306472 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:00:32.306542 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:00:32.306577 1848358 kubeadm.go:319] OS: Linux
	I1216 03:00:32.306630 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:00:32.306684 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:00:32.306730 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:00:32.306783 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:00:32.306837 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:00:32.306884 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:00:32.306934 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:00:32.306987 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:00:32.307033 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:00:32.370232 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:00:32.370342 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:00:32.370445 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:00:32.376940 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:00:32.380870 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:00:32.380973 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:00:32.381073 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:00:32.381166 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:00:32.381227 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:00:32.381296 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:00:32.381349 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:00:32.381411 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:00:32.381496 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:00:32.381600 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:00:32.381683 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:00:32.381723 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:00:32.381783 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:00:32.587867 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:00:32.728887 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:00:33.127071 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:00:33.632583 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:00:33.851925 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:00:33.852650 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:00:33.855273 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:00:33.858613 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:00:33.858712 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:00:33.858788 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:00:33.858854 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:00:33.878797 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:00:33.879802 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:00:33.887340 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:00:33.887615 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:00:33.887656 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:00:34.023686 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:00:34.027990 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:04:34.028846 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005338087s
	I1216 03:04:34.028875 1848358 kubeadm.go:319] 
	I1216 03:04:34.028931 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:04:34.028963 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:04:34.029067 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:04:34.029071 1848358 kubeadm.go:319] 
	I1216 03:04:34.029175 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:04:34.029206 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:04:34.029236 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:04:34.029239 1848358 kubeadm.go:319] 
	I1216 03:04:34.033654 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:04:34.034083 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:04:34.034191 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:34.034426 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:04:34.034431 1848358 kubeadm.go:319] 
	I1216 03:04:34.034499 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 03:04:34.034613 1848358 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005338087s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
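kubeadm's own suggestions are the right first stops here, and the health probe it waited on is a plain HTTP endpoint on the node, so the failure can be reproduced by hand. Run on the node, the checks would be:

    # The troubleshooting steps named in the error, plus the probe kubeadm polled
    systemctl status kubelet                          # is the unit active at all?
    journalctl -xeu kubelet -n 100                    # recent kubelet errors
    curl -sSL http://127.0.0.1:10248/healthz; echo    # the 10248 healthz probe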
	
	I1216 03:04:34.034714 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:04:34.442103 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:34.455899 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:34.455954 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:34.464166 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:34.464176 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:34.464227 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:04:34.472141 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:34.472197 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:34.479703 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:04:34.487496 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:34.487553 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:34.495305 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.504218 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:34.504277 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.512085 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:04:34.520037 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:34.520091 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:34.527590 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:34.569546 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:04:34.569597 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:34.648580 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:34.648645 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:04:34.648680 1848358 kubeadm.go:319] OS: Linux
	I1216 03:04:34.648724 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:34.648775 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:04:34.648847 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:34.648894 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:34.648941 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:34.648988 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:34.649031 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:34.649078 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:34.649123 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:04:34.718553 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:34.718667 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:34.718765 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:04:34.725198 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:04:34.730521 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:34.730604 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:34.730670 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:34.730745 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:04:34.730804 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:04:34.730873 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:04:34.730926 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:04:34.730988 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:04:34.731077 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:04:34.731151 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:04:34.731222 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:04:34.731258 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:04:34.731313 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:34.775823 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:35.226979 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:04:35.500835 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:35.803186 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:35.922858 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:35.923646 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:35.926392 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:35.929487 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:04:35.929587 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:35.929670 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:35.930420 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:35.952397 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:35.952501 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:04:35.960726 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:04:35.961037 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:35.961210 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:36.110987 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:04:36.111155 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:08:36.111000 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000244075s
	I1216 03:08:36.111025 1848358 kubeadm.go:319] 
	I1216 03:08:36.111095 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:08:36.111126 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:08:36.111231 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:08:36.111235 1848358 kubeadm.go:319] 
	I1216 03:08:36.111337 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:08:36.111368 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:08:36.111397 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:08:36.111401 1848358 kubeadm.go:319] 
	I1216 03:08:36.115184 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:08:36.115598 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:08:36.115704 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:08:36.115939 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:08:36.115944 1848358 kubeadm.go:319] 
	I1216 03:08:36.116012 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 03:08:36.116067 1848358 kubeadm.go:403] duration metric: took 12m6.232765178s to StartCluster
	I1216 03:08:36.116112 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:08:36.116177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:08:36.140414 1848358 cri.go:89] found id: ""
	I1216 03:08:36.140430 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.140437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:08:36.140442 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:08:36.140504 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:08:36.164577 1848358 cri.go:89] found id: ""
	I1216 03:08:36.164590 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.164598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:08:36.164604 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:08:36.164663 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:08:36.188307 1848358 cri.go:89] found id: ""
	I1216 03:08:36.188321 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.188328 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:08:36.188333 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:08:36.188394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:08:36.213037 1848358 cri.go:89] found id: ""
	I1216 03:08:36.213050 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.213057 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:08:36.213062 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:08:36.213121 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:08:36.239675 1848358 cri.go:89] found id: ""
	I1216 03:08:36.239690 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.239698 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:08:36.239704 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:08:36.239762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:08:36.262932 1848358 cri.go:89] found id: ""
	I1216 03:08:36.262947 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.262955 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:08:36.262960 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:08:36.263018 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:08:36.288318 1848358 cri.go:89] found id: ""
	I1216 03:08:36.288332 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.288340 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:08:36.288349 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:08:36.288358 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:08:36.350247 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:08:36.350267 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:08:36.380644 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:08:36.380660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:08:36.436449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:08:36.436466 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:08:36.457199 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:08:36.457222 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:08:36.526010 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1216 03:08:36.526029 1848358 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 03:08:36.526065 1848358 out.go:285] * 
	W1216 03:08:36.526124 1848358 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.526137 1848358 out.go:285] * 
	W1216 03:08:36.528271 1848358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:08:36.533177 1848358 out.go:203] 
	W1216 03:08:36.537050 1848358 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.537112 1848358 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 03:08:36.537136 1848358 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 03:08:36.540537 1848358 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418983774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418998239Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419036154Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419097175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419108202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419119509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419128805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419140062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419155980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419187823Z" level=info msg="Connect containerd service"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419497668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.420076931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439480285Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439840672Z" level=info msg="Start recovering state"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439686821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.443248018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513022632Z" level=info msg="Start event monitor"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513204659Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513279259Z" level=info msg="Start streaming server"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513342856Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513405935Z" level=info msg="runtime interface starting up..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513471920Z" level=info msg="starting plugins..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513539119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:56:28 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.516797790Z" level=info msg="containerd successfully booted in 0.120064s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:08:39.931271   21126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:39.932122   21126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:39.933836   21126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:39.934151   21126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:39.935641   21126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:08:39 up  8:51,  0 user,  load average: 0.73, 0.37, 0.56
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:08:36 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:37 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 03:08:37 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:37 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:37 functional-389759 kubelet[20905]: E1216 03:08:37.260550   20905 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:37 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:37 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:37 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 03:08:37 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:37 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:38 functional-389759 kubelet[21002]: E1216 03:08:38.022602   21002 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:38 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:38 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:38 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 16 03:08:38 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:38 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:38 functional-389759 kubelet[21023]: E1216 03:08:38.763860   21023 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:38 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:38 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:08:39 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 16 03:08:39 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:39 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:08:39 functional-389759 kubelet[21044]: E1216 03:08:39.505303   21044 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:08:39 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:08:39 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (387.359452ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.34s)
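The kubelet journal above points at the root cause behind this whole group of failures: kubelet v1.35.0-beta.0 validates its configuration and refuses to run on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the control plane never comes up and every apiserver-dependent check fails with "connection refused". Below is a minimal sketch of how one might confirm the host's cgroup mode and, following the kubeadm warning text, opt back into cgroup v1 via the FailCgroupV1 kubelet option; the lowerCamelCase YAML field name (failCgroupV1) and the append-to-config approach are assumptions, not taken from this log:

	# Print the filesystem type mounted at /sys/fs/cgroup:
	# "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1.
	stat -fc %T /sys/fs/cgroup/

	# Sketch: append the option to the kubelet config kubeadm wrote,
	# then restart the service (assumed YAML field name: failCgroupV1).
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

Per the same warning, the SystemVerification preflight check must also be skipped explicitly; the kubeadm init invocation above already does that through its --ignore-preflight-errors list.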

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-389759 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-389759 apply -f testdata/invalidsvc.yaml: exit status 1 (58.323453ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-389759 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
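Here kubectl never gets as far as validating the manifest: downloading the OpenAPI schema already fails because nothing is listening on 192.168.49.2:8441. The --validate=false escape hatch suggested in the error would not help, since the apply itself needs the apiserver. A quick reachability probe against the same endpoint (a sketch; -k skips TLS verification because only TCP/HTTP reachability is being tested):

	# Expected to fail with "connection refused" while the control plane is down.
	curl -k --max-time 5 https://192.168.49.2:8441/healthz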

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389759 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389759 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389759 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389759 --alsologtostderr -v=1] stderr:
I1216 03:11:09.037185 1865856 out.go:360] Setting OutFile to fd 1 ...
I1216 03:11:09.037331 1865856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:09.037349 1865856 out.go:374] Setting ErrFile to fd 2...
I1216 03:11:09.037367 1865856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:09.037654 1865856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:11:09.037932 1865856 mustload.go:66] Loading cluster: functional-389759
I1216 03:11:09.038384 1865856 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:09.038917 1865856 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:11:09.055722 1865856 host.go:66] Checking if "functional-389759" exists ...
I1216 03:11:09.056037 1865856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 03:11:09.125574 1865856 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:09.115583263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 03:11:09.125690 1865856 api_server.go:166] Checking apiserver status ...
I1216 03:11:09.125757 1865856 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 03:11:09.125800 1865856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:11:09.144977 1865856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
W1216 03:11:09.244366 1865856 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1216 03:11:09.247701 1865856 out.go:179] * The control-plane node functional-389759 apiserver is not running: (state=Stopped)
I1216 03:11:09.250546 1865856 out.go:179]   To start a cluster, run: "minikube start -p functional-389759"
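The dashboard command gives up for the same underlying reason: its apiserver preflight (the pgrep run logged above) finds no kube-apiserver process, so it prints the "apiserver is not running" hint instead of a URL. The same check can be reproduced by hand against this profile; a sketch reusing the pgrep pattern from the log:

	# Exits non-zero with no output when kube-apiserver is not running,
	# matching the "stopped: unable to get apiserver pid" line above.
	minikube -p functional-389759 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'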
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
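The inspect output confirms the container itself is running and that the apiserver port 8441/tcp is published on 127.0.0.1:34357; only the process behind it is missing. The host port for any published container port can be read back with the same Go template minikube uses for 22/tcp above; a sketch for the apiserver port:

	# Prints 34357 for the inspect output above.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389759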
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (300.006575ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-389759 service hello-node --url                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001:/mount-9p --alsologtostderr -v=1              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh -- ls -la /mount-9p                                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh cat /mount-9p/test-1765854658954368425                                                                                        │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2071099183/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh -- ls -la /mount-9p                                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount1 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount1                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount2 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount3 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount1                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh findmnt -T /mount2                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh findmnt -T /mount3                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ mount     │ -p functional-389759 --kill=true                                                                                                                    │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ start     │ -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ start     │ -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ start     │ -p functional-389759 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-389759 --alsologtostderr -v=1                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:11:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:11:08.795955 1865782 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:11:08.796085 1865782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.796091 1865782 out.go:374] Setting ErrFile to fd 2...
	I1216 03:11:08.796094 1865782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.796367 1865782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:11:08.796716 1865782 out.go:368] Setting JSON to false
	I1216 03:11:08.797563 1865782 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":32013,"bootTime":1765822656,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:11:08.797628 1865782 start.go:143] virtualization:  
	I1216 03:11:08.800934 1865782 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:11:08.804685 1865782 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:11:08.804777 1865782 notify.go:221] Checking for updates...
	I1216 03:11:08.810552 1865782 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:11:08.813422 1865782 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:11:08.816394 1865782 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:11:08.819266 1865782 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:11:08.822153 1865782 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:11:08.825481 1865782 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:11:08.826103 1865782 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:11:08.847078 1865782 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:11:08.847214 1865782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.902895 1865782 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.893712123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.903009 1865782 docker.go:319] overlay module found
	I1216 03:11:08.906147 1865782 out.go:179] * Using the docker driver based on existing profile
	I1216 03:11:08.908914 1865782 start.go:309] selected driver: docker
	I1216 03:11:08.908934 1865782 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.909031 1865782 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:11:08.909157 1865782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.974996 1865782 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.964969121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.975521 1865782 cni.go:84] Creating CNI manager for ""
	I1216 03:11:08.975583 1865782 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 03:11:08.975624 1865782 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.980501 1865782 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418983774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418998239Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419036154Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419097175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419108202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419119509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419128805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419140062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419155980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419187823Z" level=info msg="Connect containerd service"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419497668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.420076931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439480285Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439840672Z" level=info msg="Start recovering state"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439686821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.443248018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513022632Z" level=info msg="Start event monitor"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513204659Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513279259Z" level=info msg="Start streaming server"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513342856Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513405935Z" level=info msg="runtime interface starting up..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513471920Z" level=info msg="starting plugins..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513539119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:56:28 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.516797790Z" level=info msg="containerd successfully booted in 0.120064s"
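	
	The "failed to load cni during init" error above is expected this early in the boot: containerd starts before any CNI config has been written to /etc/cni/net.d, and minikube only installs its CNI (kindnet, per the "Last Start" log above) once the kubelet is healthy. A diagnostic sketch for checking whether a config was ever written inside the node, reusing the ssh subcommand already exercised in this run:
	
	  out/minikube-linux-arm64 -p functional-389759 ssh -- ls -la /etc/cni/net.d
	
	Because the kubelet never comes up in this run (see the kubelet section below), the directory is expected to remain empty.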
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:11:10.301857   23400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:10.302735   23400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:10.304734   23400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:10.305511   23400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:10.306953   23400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
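	
	The repeated "connection refused" on localhost:8441 is a downstream symptom rather than an independent failure: with the kubelet crash-looping (see the kubelet section below), the apiserver static pod never starts, so nothing listens on the forwarded port. A minimal check from inside the node, assuming ss(8) is present in the kicbase image:
	
	  out/minikube-linux-arm64 -p functional-389759 ssh -- "sudo ss -ltnp | grep 8441 || echo nothing listening on 8441"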
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:11:10 up  8:53,  0 user,  load average: 1.01, 0.44, 0.54
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:11:07 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:07 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 522.
	Dec 16 03:11:07 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:07 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:08 functional-389759 kubelet[23260]: E1216 03:11:08.019645   23260 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:08 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:08 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:08 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 523.
	Dec 16 03:11:08 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:08 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:08 functional-389759 kubelet[23281]: E1216 03:11:08.759604   23281 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:08 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:08 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:09 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 524.
	Dec 16 03:11:09 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:09 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:09 functional-389759 kubelet[23296]: E1216 03:11:09.519729   23296 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:09 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:09 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 525.
	Dec 16 03:11:10 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:10 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:10 functional-389759 kubelet[23391]: E1216 03:11:10.244067   23391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
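
The kubelet log above is the root cause for this whole group of failures: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, so systemd restart-loops it (restart counter 522 through 525 in this window) and the apiserver never starts. A quick way to confirm which cgroup version the host and Docker see, assuming stat(1) and a Docker CLI recent enough to expose CgroupVersion:

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1
	stat -fc %T /sys/fs/cgroup/
	docker info --format '{{.CgroupVersion}}'

On this Ubuntu 20.04 / 5.15 worker both commands should report cgroup v1, matching the kubelet validation error.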
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (314.291136ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 status: exit status 2 (295.146024ms)

-- stdout --
	functional-389759
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-389759 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (335.297094ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-389759 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 status -o json: exit status 2 (307.894585ms)

-- stdout --
	{"Name":"functional-389759","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-389759 status -o json" : exit status 2
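
All three status invocations agree on the same state (host Running, kubelet and apiserver Stopped), and the JSON form is the easiest to assert on in scripts. A small sketch, assuming jq is available on the host:

	out/minikube-linux-arm64 -p functional-389759 status -o json | jq -r '"kubelet=" + .Kubelet + " apiserver=" + .APIServer'
	# prints: kubelet=Stopped apiserver=Stopped

Note that status exits 2 whenever a component is down ("status error: exit status 2 (may be ok)" above), so scripts should inspect the JSON fields rather than rely on the exit code alone.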
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
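
The inspect dump above is the same container state already captured for the previous test; the parts that matter for this failure are the published ports and the network addressing. Both can be pulled out with docker inspect's Go-template support instead of scanning the raw JSON:

	# host port that the apiserver port 8441/tcp is published on (34357 above)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-389759
	# container IP on the functional-389759 network (192.168.49.2 above)
	docker inspect -f '{{ (index .NetworkSettings.Networks "functional-389759").IPAddress }}' functional-389759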
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (294.71034ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-389759 service list                                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ service │ functional-389759 service list -o json                                                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ service │ functional-389759 service --namespace=default --https --url hello-node                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ service │ functional-389759 service hello-node --url --format={{.IP}}                                                                                         │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ service │ functional-389759 service hello-node --url                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ ssh     │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ mount   │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001:/mount-9p --alsologtostderr -v=1              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ ssh     │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh -- ls -la /mount-9p                                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh cat /mount-9p/test-1765854658954368425                                                                                        │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh     │ functional-389759 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount   │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2071099183/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh     │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh -- ls -la /mount-9p                                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount   │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount1 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh     │ functional-389759 ssh findmnt -T /mount1                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount   │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount2 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount   │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount3 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh     │ functional-389759 ssh findmnt -T /mount1                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh findmnt -T /mount2                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ functional-389759 ssh findmnt -T /mount3                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ mount   │ -p functional-389759 --kill=true                                                                                                                    │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:56:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:56:25.844373 1848358 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:56:25.844466 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844470 1848358 out.go:374] Setting ErrFile to fd 2...
	I1216 02:56:25.844474 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844836 1848358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:56:25.845570 1848358 out.go:368] Setting JSON to false
	I1216 02:56:25.846389 1848358 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":31130,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:56:25.846449 1848358 start.go:143] virtualization:  
	I1216 02:56:25.849867 1848358 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:56:25.854549 1848358 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:56:25.854652 1848358 notify.go:221] Checking for updates...
	I1216 02:56:25.860318 1848358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:56:25.863452 1848358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:56:25.866454 1848358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:56:25.869328 1848358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:56:25.872192 1848358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:56:25.875771 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:25.875865 1848358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:56:25.910877 1848358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:56:25.910989 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:25.979751 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:25.969640801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
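The docker info blob above is the raw JSON returned by the docker system info --format "{{json .}}" call logged two lines earlier. As a minimal illustrative sketch of consuming that output in Go, keeping only a handful of the fields visible in the log (the struct below is hypothetical, not minikube's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo keeps just a few of the fields shown in the log line above.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        Architecture    string `json:"Architecture"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
        CgroupDriver    string `json:"CgroupDriver"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker not available:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("unexpected output:", err)
            return
        }
        fmt.Printf("%s on %s/%s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.ServerVersion, info.OperatingSystem, info.Architecture,
            info.NCPU, info.MemTotal, info.CgroupDriver)
    }

The CgroupDriver:cgroupfs field in this output is what later drives the SystemdCgroup = false rewrite in the containerd configuration below.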
	I1216 02:56:25.979847 1848358 docker.go:319] overlay module found
	I1216 02:56:25.984585 1848358 out.go:179] * Using the docker driver based on existing profile
	I1216 02:56:25.987331 1848358 start.go:309] selected driver: docker
	I1216 02:56:25.987339 1848358 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:25.987425 1848358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:56:25.987525 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:26.045497 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:26.035789712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:56:26.045925 1848358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:56:26.045948 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:26.045996 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:26.046044 1848358 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:26.049158 1848358 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:56:26.052095 1848358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:56:26.055176 1848358 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:56:26.058088 1848358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:56:26.058108 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:26.058178 1848358 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:56:26.058195 1848358 cache.go:65] Caching tarball of preloaded images
	I1216 02:56:26.058305 1848358 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:56:26.058312 1848358 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:56:26.058447 1848358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:56:26.078911 1848358 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:56:26.078923 1848358 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:56:26.078944 1848358 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:56:26.078984 1848358 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:56:26.079085 1848358 start.go:364] duration metric: took 83.453µs to acquireMachinesLock for "functional-389759"
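The acquireMachinesLock entry above shows the lock parameters minikube logged: Delay:500ms and Timeout:10m0s. A generic sketch of that acquire-with-retry pattern, assuming a plain in-process mutex (minikube's real lock is cross-process; this illustrates only the polling parameters from the log):

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    // tryAcquire polls TryLock every delay until timeout elapses,
    // mirroring the Delay/Timeout values in the log line above.
    func tryAcquire(mu *sync.Mutex, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if mu.TryLock() {
                return nil
            }
            time.Sleep(delay)
        }
        return errors.New("timed out waiting for machines lock")
    }

    func main() {
        var mu sync.Mutex
        if err := tryAcquire(&mu, 500*time.Millisecond, 10*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        defer mu.Unlock()
        fmt.Println("lock acquired")
    }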
	I1216 02:56:26.079107 1848358 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:56:26.079112 1848358 fix.go:54] fixHost starting: 
	I1216 02:56:26.079431 1848358 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:56:26.097178 1848358 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:56:26.097205 1848358 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:56:26.100419 1848358 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:56:26.100450 1848358 machine.go:94] provisionDockerMachine start ...
	I1216 02:56:26.100545 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.118508 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.118832 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.118839 1848358 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:56:26.259148 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.259164 1848358 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:56:26.259234 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.277500 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.277820 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.277829 1848358 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:56:26.421165 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.421257 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.440349 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.440644 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.440657 1848358 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:56:26.579508 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: 
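The SSH snippet above patches /etc/hosts in three steps: skip if a line already ends with the hostname, otherwise rewrite an existing 127.0.1.1 entry in place, otherwise append one. The same decision tree over an in-memory hosts file, as a minimal Go sketch (function name and test input are illustrative):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the shell logic above: no-op if name is already
    // mapped, rewrite an existing 127.0.1.1 line, or append a new one.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
            return hosts
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
        fmt.Print(ensureHostname(in, "functional-389759"))
    }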
	I1216 02:56:26.579533 1848358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:56:26.579555 1848358 ubuntu.go:190] setting up certificates
	I1216 02:56:26.579573 1848358 provision.go:84] configureAuth start
	I1216 02:56:26.579642 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:26.598860 1848358 provision.go:143] copyHostCerts
	I1216 02:56:26.598936 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:56:26.598944 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:56:26.599024 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:56:26.599152 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:56:26.599157 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:56:26.599183 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:56:26.599298 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:56:26.599302 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:56:26.599329 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:56:26.599373 1848358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:56:26.772331 1848358 provision.go:177] copyRemoteCerts
	I1216 02:56:26.772384 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:56:26.772421 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.790833 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:26.886672 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:56:26.903453 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:56:26.920711 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 02:56:26.938516 1848358 provision.go:87] duration metric: took 358.921052ms to configureAuth
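configureAuth above regenerates the docker-machine server certificate with the SAN list logged at the "generating server cert" line: two IPs (127.0.0.1, 192.168.49.2) and three DNS names (functional-389759, localhost, minikube). A self-contained sketch of issuing a certificate with exactly those SANs using Go's standard library (self-signed here for brevity; the real cert is signed by the minikube CA key):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-389759"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // The SANs from the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"functional-389759", "localhost", "minikube"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued %d-byte DER certificate\n", len(der))
    }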
	I1216 02:56:26.938533 1848358 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:56:26.938730 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:26.938735 1848358 machine.go:97] duration metric: took 838.281264ms to provisionDockerMachine
	I1216 02:56:26.938741 1848358 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:56:26.938751 1848358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:56:26.938797 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:56:26.938840 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.957601 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.062997 1848358 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:56:27.066589 1848358 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:56:27.066608 1848358 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:56:27.066618 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:56:27.066672 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:56:27.066743 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:56:27.066818 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:56:27.066859 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:56:27.074143 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:27.091762 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:56:27.109760 1848358 start.go:296] duration metric: took 171.004929ms for postStartSetup
	I1216 02:56:27.109845 1848358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:56:27.109892 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.130041 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.224282 1848358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:56:27.229295 1848358 fix.go:56] duration metric: took 1.150175721s for fixHost
	I1216 02:56:27.229312 1848358 start.go:83] releasing machines lock for "functional-389759", held for 1.150220136s
	I1216 02:56:27.229388 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:27.246922 1848358 ssh_runner.go:195] Run: cat /version.json
	I1216 02:56:27.246974 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.247232 1848358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:56:27.247302 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.269086 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.280897 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.370924 1848358 ssh_runner.go:195] Run: systemctl --version
	I1216 02:56:27.469438 1848358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:56:27.474082 1848358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:56:27.474143 1848358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:56:27.482716 1848358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:56:27.482730 1848358 start.go:496] detecting cgroup driver to use...
	I1216 02:56:27.482760 1848358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:56:27.482821 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:56:27.499295 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:56:27.512730 1848358 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:56:27.512788 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:56:27.529084 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:56:27.542618 1848358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:56:27.669326 1848358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:56:27.809661 1848358 docker.go:234] disabling docker service ...
	I1216 02:56:27.809726 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:56:27.825238 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:56:27.839007 1848358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:56:27.961490 1848358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:56:28.085730 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:56:28.099793 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:56:28.115219 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:56:28.124904 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:56:28.134481 1848358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:56:28.134543 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:56:28.143714 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.152978 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:56:28.161801 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.170944 1848358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:56:28.179475 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:56:28.188723 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:56:28.197979 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:56:28.206949 1848358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:56:28.214520 1848358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:56:28.222338 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.339529 1848358 ssh_runner.go:195] Run: sudo systemctl restart containerd
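The run of sed commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox (pause) image, forcing SystemdCgroup = false to match the cgroupfs driver detected earlier, normalizing the runc runtime name to io.containerd.runc.v2, and re-enabling unprivileged ports, before daemon-reload and a containerd restart pick the changes up. One of those rewrites as a Go sketch over an in-memory TOML string (illustrative only; the log does this with sed over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|'`
    // step above, preserving the line's indentation via the capture group.
    func setSystemdCgroup(configTOML string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
    }

    func main() {
        in := "  [plugins.options]\n    SystemdCgroup = true\n"
        fmt.Print(setSystemdCgroup(in, false))
    }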
	I1216 02:56:28.517809 1848358 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:56:28.517866 1848358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:56:28.522881 1848358 start.go:564] Will wait 60s for crictl version
	I1216 02:56:28.522937 1848358 ssh_runner.go:195] Run: which crictl
	I1216 02:56:28.526562 1848358 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:56:28.550167 1848358 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:56:28.550234 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.570328 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.596807 1848358 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:56:28.599682 1848358 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:56:28.616323 1848358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:56:28.623466 1848358 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 02:56:28.626293 1848358 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:56:28.626428 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:28.626509 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.651243 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.651255 1848358 containerd.go:534] Images already preloaded, skipping extraction
	I1216 02:56:28.651317 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.676192 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.676203 1848358 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:56:28.676209 1848358 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:56:28.676312 1848358 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:56:28.676373 1848358 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:56:28.700239 1848358 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 02:56:28.700256 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:28.700264 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:28.700272 1848358 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:56:28.700294 1848358 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:56:28.700400 1848358 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
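The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) separated by --- markers. A dependency-free Go sketch that splits such a stream and reports each document's kind (purely illustrative; kubeadm itself performs strict schema decoding):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
        for i, doc := range strings.Split(stream, "\n---\n") {
            kind := "(missing kind)"
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    kind = strings.TrimPrefix(line, "kind: ")
                    break
                }
            }
            fmt.Printf("document %d: %s\n", i+1, kind)
        }
    }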
	
	I1216 02:56:28.700473 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:56:28.708593 1848358 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:56:28.708655 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:56:28.716199 1848358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:56:28.728994 1848358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:56:28.742129 1848358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1216 02:56:28.754916 1848358 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:56:28.758765 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.878289 1848358 ssh_runner.go:195] Run: sudo systemctl start kubelet
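The kubelet unit written just above (the 328-byte 10-kubeadm.conf drop-in and 359-byte kubelet.service) uses the standard systemd override pattern shown earlier in the log: an empty ExecStart= clears the packaged command before the drop-in's own ExecStart takes effect. A sketch of rendering such a drop-in with text/template (the flag set paraphrases the logged ExecStart line; exact flags vary by Kubernetes version):

    package main

    import (
        "os"
        "text/template"
    )

    type kubeletOpts struct {
        Binary   string
        NodeName string
        NodeIP   string
    }

    // dropIn mirrors the [Service] override shown in the log: the first,
    // empty ExecStart= resets the unit's command before the second one.
    const dropIn = "[Unit]\nWants=containerd.service\n\n[Service]\nExecStart=\nExecStart={{.Binary}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}\n"

    func main() {
        t := template.Must(template.New("dropin").Parse(dropIn))
        _ = t.Execute(os.Stdout, kubeletOpts{
            Binary:   "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
            NodeName: "functional-389759",
            NodeIP:   "192.168.49.2",
        })
    }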
	I1216 02:56:29.187922 1848358 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:56:29.187939 1848358 certs.go:195] generating shared ca certs ...
	I1216 02:56:29.187954 1848358 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:56:29.188132 1848358 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:56:29.188175 1848358 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:56:29.188182 1848358 certs.go:257] generating profile certs ...
	I1216 02:56:29.188282 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:56:29.188344 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:56:29.188398 1848358 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:56:29.188534 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:56:29.188573 1848358 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:56:29.188580 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:56:29.188615 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:56:29.188648 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:56:29.188671 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:56:29.188729 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:29.189416 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:56:29.212546 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:56:29.235562 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:56:29.257334 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:56:29.278410 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:56:29.297639 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:56:29.316055 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:56:29.333992 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:56:29.351802 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:56:29.370197 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:56:29.388624 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:56:29.406325 1848358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:56:29.419477 1848358 ssh_runner.go:195] Run: openssl version
	I1216 02:56:29.425780 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.433488 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:56:29.440931 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444594 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444652 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.485312 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:56:29.492681 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.499838 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:56:29.507532 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511555 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511621 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.552382 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:56:29.559682 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.566808 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:56:29.574430 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578016 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578077 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.619735 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
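Each of the three certificate blocks above follows the OpenSSL trust-store convention: copy the PEM into /usr/share/ca-certificates, then symlink it from /etc/ssl/certs under the name <subject-hash>.0, where the hash comes from openssl x509 -hash. A Go sketch of that indexing step, assuming the openssl CLI is available (error handling trimmed for brevity):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert creates certsDir/<hash>.0 -> certPath, as the
    // ln -fs / openssl x509 -hash sequence above does.
    func linkCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // mirror ln -fs: drop any stale link first
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }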
	I1216 02:56:29.627282 1848358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:56:29.630975 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:56:29.674022 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:56:29.716546 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:56:29.760378 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:56:29.801675 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:56:29.842471 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
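The six openssl x509 ... -checkend 86400 calls above each succeed only if the named certificate stays valid for at least the next 86400 seconds (24 hours); a failure here would force certificate regeneration. The equivalent check in Go's standard library (path taken from the log; this runs where that file exists):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // checkend from now, matching `openssl x509 -checkend` semantics.
    func validFor(path string, checkend time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(checkend).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
        fmt.Println(ok, err)
    }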
	I1216 02:56:29.883311 1848358 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:29.883412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:56:29.883472 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.910518 1848358 cri.go:89] found id: ""
	I1216 02:56:29.910580 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:56:29.918530 1848358 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:56:29.918539 1848358 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:56:29.918590 1848358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:56:29.926051 1848358 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:29.926594 1848358 kubeconfig.go:125] found "functional-389759" server: "https://192.168.49.2:8441"
	I1216 02:56:29.927850 1848358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:56:29.937055 1848358 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 02:41:54.425829655 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 02:56:28.747941655 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
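The drift decision above hinges on diff's exit status: diff -u exits 0 when the active kubeadm.yaml matches the newly rendered kubeadm.yaml.new and 1 when they differ, and here the admission-plugins change triggers a full control-plane reconfigure. A simplified stand-in for that check in Go (byte comparison instead of a unified diff):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // driftDetected compares the active kubeadm config with the freshly
    // rendered one, as the `sudo diff -u old new` step above does.
    func driftDetected(oldPath, newPath string) (bool, error) {
        a, err := os.ReadFile(oldPath)
        if err != nil {
            return false, err
        }
        b, err := os.ReadFile(newPath)
        if err != nil {
            return false, err
        }
        return !bytes.Equal(a, b), nil
    }

    func main() {
        drift, err := driftDetected("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drift, err)
    }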
	I1216 02:56:29.937066 1848358 kubeadm.go:1161] stopping kube-system containers ...
	I1216 02:56:29.937078 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1216 02:56:29.937140 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.975717 1848358 cri.go:89] found id: ""
	I1216 02:56:29.975778 1848358 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 02:56:29.994835 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:56:30.004346 1848358 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 02:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 02:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 02:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 02:46 /etc/kubernetes/scheduler.conf
	
	I1216 02:56:30.004430 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 02:56:30.041702 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 02:56:30.052507 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.052569 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:56:30.061943 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.073420 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.073488 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.083069 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 02:56:30.092935 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.092994 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:56:30.101587 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:56:30.114178 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:30.166214 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.346212 1848358 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179973709s)
	I1216 02:56:31.346269 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.548322 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.601050 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
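The long run of pgrep lines that follows is minikube polling roughly every 500 ms for a kube-apiserver process that, in this failing run, does not appear within the wait window. A generic sketch of that wait loop (interval and timeout here are illustrative; pgrep exits 0 only when a match exists):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls `pgrep -xnf pattern` every interval until timeout.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
                return nil // found a matching process
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for " + pattern)
    }

    func main() {
        err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
        fmt.Println(err)
    }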
	I1216 02:56:31.649581 1848358 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:56:31.649669 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:32.150228 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:32.649839 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:33.149820 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:33.650613 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:34.150733 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:34.649773 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:35.150705 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:35.649751 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:36.150703 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:36.650627 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:37.150392 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:37.649857 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:38.150375 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:38.650600 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:39.150146 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:39.649848 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:40.150319 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:40.650732 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:41.150402 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:41.649922 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:42.150742 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:42.649781 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:43.150590 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:43.650502 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:44.149866 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:44.649912 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:45.150004 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:45.650501 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:46.149734 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:46.649745 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:47.150639 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:47.649826 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:48.150565 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:48.649896 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:49.149744 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:49.650628 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:50.149885 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:50.649789 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:51.150643 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:51.649902 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:52.149806 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:52.650451 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:53.150140 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:53.649767 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:54.150751 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:54.650468 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:55.149878 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:55.650629 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:56.150781 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:56.650556 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:57.149864 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:57.650766 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:58.150741 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:58.649892 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:59.150551 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:56:59.650283 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:00.150247 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:00.650607 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:01.150638 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:01.650253 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:02.149858 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:02.650117 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:03.149960 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:03.649720 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:04.150726 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:04.650425 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:05.149866 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:05.649851 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:06.150611 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:06.650200 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:07.150444 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:07.650600 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:08.149853 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:08.650701 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:09.150579 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:09.649862 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:10.149858 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:10.650393 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:11.150022 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:11.649819 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:12.150562 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:12.649775 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:13.150489 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:13.650396 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:14.149848 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:14.649998 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:15.149945 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:15.649800 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:16.149886 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:16.650049 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:17.149847 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:17.649836 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:18.149898 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:18.649853 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:19.149883 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:19.649825 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:20.149732 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:20.650204 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:21.149852 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:21.649824 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:22.150472 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:22.650452 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:23.150780 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:23.650556 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:24.149887 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:24.650458 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:25.150518 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:25.650351 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:26.149849 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:26.650701 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:27.150612 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:27.650232 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:28.150399 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:28.650537 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:29.150626 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:29.650514 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:30.150439 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:30.650333 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:31.149886 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
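The long run of identical commands above is an apiserver wait loop: minikube re-runs pgrep roughly every 500ms (two polls per second, per the timestamps) until a kube-apiserver process appears or the wait gives up; here it never appears, so the run falls through to log gathering below. A sketch of the pattern under those assumptions (the timeout value is illustrative, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process for this
// profile exists or the deadline passes. The 500ms interval matches the
// cadence visible in the log timestamps; the timeout is an assumption.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a matching process exists, non-zero otherwise.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}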
	I1216 02:57:31.650315 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:31.650394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:31.674930 1848358 cri.go:89] found id: ""
	I1216 02:57:31.674944 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.674951 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:31.674956 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:31.675016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:31.714000 1848358 cri.go:89] found id: ""
	I1216 02:57:31.714013 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.714021 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:31.714026 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:31.714086 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:31.747840 1848358 cri.go:89] found id: ""
	I1216 02:57:31.747854 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.747861 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:31.747866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:31.747926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:31.773860 1848358 cri.go:89] found id: ""
	I1216 02:57:31.773874 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.773886 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:31.773891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:31.773953 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:31.802242 1848358 cri.go:89] found id: ""
	I1216 02:57:31.802256 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.802263 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:31.802268 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:31.802327 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:31.827140 1848358 cri.go:89] found id: ""
	I1216 02:57:31.827170 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.827177 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:31.827183 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:31.827250 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:31.851813 1848358 cri.go:89] found id: ""
	I1216 02:57:31.851827 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.851834 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:31.851841 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:31.851852 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:31.907296 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:31.907315 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:31.924742 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:31.924759 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:31.990670 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:31.980837   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.981269   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.984774   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.985315   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.986770   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:31.980837   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.981269   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.984774   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.985315   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.986770   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
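Every "describe nodes" attempt in this trace fails the same way: kubectl cannot reach https://localhost:8441 because nothing is listening on that port ("connection refused" comes from the kernel rejecting the TCP handshake, not from a misbehaving server). A tiny standalone probe, assumed and not part of minikube, that reproduces the check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" means no process has bound port 8441,
	// i.e. the apiserver never came up on this node.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}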
	I1216 02:57:31.990681 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:31.990692 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:32.056720 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:32.056741 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
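Each gather cycle like the one above queries every expected control-plane component by name via crictl, and the final "container status" step falls back to docker ps when crictl is absent (the which crictl || echo crictl guard). A minimal sketch of the listing step, using only the flags shown in the log (the function name and the main loop are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or not) whose
// name matches the filter, as the log's crictl invocations do.
func listContainers(name string) ([]string, error) {
	// --quiet prints one container ID per line; -a includes stopped containers.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, _ := listContainers(name)
		fmt.Printf("%-24s %d container(s)\n", name, len(ids))
	}
}

In this run every query returns an empty ID list, which is why each cycle logs 0 containers for all seven components before falling back to kubelet, dmesg, and containerd journals.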
	I1216 02:57:34.586741 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:34.596594 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:34.596656 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:34.624415 1848358 cri.go:89] found id: ""
	I1216 02:57:34.624430 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.624437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:34.624454 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:34.624529 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:34.648856 1848358 cri.go:89] found id: ""
	I1216 02:57:34.648877 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.648884 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:34.648889 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:34.648952 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:34.674838 1848358 cri.go:89] found id: ""
	I1216 02:57:34.674852 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.674859 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:34.674864 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:34.674938 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:34.720068 1848358 cri.go:89] found id: ""
	I1216 02:57:34.720082 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.720089 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:34.720093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:34.720152 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:34.749510 1848358 cri.go:89] found id: ""
	I1216 02:57:34.749525 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.749531 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:34.749541 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:34.749603 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:34.776711 1848358 cri.go:89] found id: ""
	I1216 02:57:34.776725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.776732 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:34.776737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:34.776797 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:34.801539 1848358 cri.go:89] found id: ""
	I1216 02:57:34.801552 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.801560 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:34.801568 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:34.801578 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:34.857992 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:34.858012 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:34.876290 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:34.876307 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:34.948190 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:34.939256   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.940046   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.941775   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.942456   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.944096   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:34.939256   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.940046   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.941775   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.942456   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.944096   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:34.948202 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:34.948213 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:35.015139 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:35.015162 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:37.549752 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:37.560125 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:37.560194 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:37.585130 1848358 cri.go:89] found id: ""
	I1216 02:57:37.585144 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.585151 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:37.585156 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:37.585216 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:37.610009 1848358 cri.go:89] found id: ""
	I1216 02:57:37.610023 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.610030 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:37.610035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:37.610096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:37.635414 1848358 cri.go:89] found id: ""
	I1216 02:57:37.635429 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.635436 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:37.635441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:37.635503 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:37.660026 1848358 cri.go:89] found id: ""
	I1216 02:57:37.660046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.660053 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:37.660059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:37.660119 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:37.702568 1848358 cri.go:89] found id: ""
	I1216 02:57:37.702583 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.702590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:37.702595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:37.702659 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:37.735671 1848358 cri.go:89] found id: ""
	I1216 02:57:37.735685 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.735693 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:37.735698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:37.735766 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:37.764451 1848358 cri.go:89] found id: ""
	I1216 02:57:37.764465 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.764472 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:37.764481 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:37.764492 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:37.781790 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:37.781808 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:37.850130 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:37.841387   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.841981   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.843649   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845020   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845734   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:37.841387   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.841981   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.843649   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845020   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845734   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:37.850150 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:37.850161 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:37.912286 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:37.912306 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:37.947545 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:37.947561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:40.504032 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:40.514627 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:40.514689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:40.543498 1848358 cri.go:89] found id: ""
	I1216 02:57:40.543513 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.543520 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:40.543524 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:40.543593 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:40.568106 1848358 cri.go:89] found id: ""
	I1216 02:57:40.568120 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.568127 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:40.568132 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:40.568190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:40.592290 1848358 cri.go:89] found id: ""
	I1216 02:57:40.592304 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.592317 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:40.592322 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:40.592382 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:40.617796 1848358 cri.go:89] found id: ""
	I1216 02:57:40.617811 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.617818 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:40.617823 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:40.617882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:40.643710 1848358 cri.go:89] found id: ""
	I1216 02:57:40.643725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.643732 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:40.643737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:40.643811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:40.672711 1848358 cri.go:89] found id: ""
	I1216 02:57:40.672731 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.672738 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:40.672743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:40.672802 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:40.704590 1848358 cri.go:89] found id: ""
	I1216 02:57:40.704604 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.704611 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:40.704620 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:40.704630 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:40.769622 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:40.769642 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:40.786992 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:40.787010 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:40.853579 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:40.844241   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.845343   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847164   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847823   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.849606   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:40.844241   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.845343   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847164   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847823   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.849606   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:40.853590 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:40.853600 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:40.915814 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:40.915833 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:43.448229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:43.458340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:43.458399 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:43.481954 1848358 cri.go:89] found id: ""
	I1216 02:57:43.481967 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.481974 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:43.481979 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:43.482037 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:43.507588 1848358 cri.go:89] found id: ""
	I1216 02:57:43.507603 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.507610 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:43.507614 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:43.507684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:43.533164 1848358 cri.go:89] found id: ""
	I1216 02:57:43.533179 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.533188 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:43.533193 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:43.533255 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:43.558139 1848358 cri.go:89] found id: ""
	I1216 02:57:43.558152 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.558159 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:43.558164 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:43.558221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:43.587218 1848358 cri.go:89] found id: ""
	I1216 02:57:43.587244 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.587251 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:43.587256 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:43.587315 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:43.613584 1848358 cri.go:89] found id: ""
	I1216 02:57:43.613598 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.613605 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:43.613610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:43.613691 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:43.645887 1848358 cri.go:89] found id: ""
	I1216 02:57:43.645901 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.645908 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:43.645916 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:43.645928 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:43.662557 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:43.662574 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:43.745017 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:43.735622   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.736621   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738304   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738872   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.740427   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:43.735622   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.736621   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738304   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738872   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.740427   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:43.745029 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:43.745040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:43.808792 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:43.808811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:43.837682 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:43.837698 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:46.396229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:46.406230 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:46.406302 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:46.429707 1848358 cri.go:89] found id: ""
	I1216 02:57:46.429721 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.429728 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:46.429733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:46.429796 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:46.454076 1848358 cri.go:89] found id: ""
	I1216 02:57:46.454090 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.454097 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:46.454101 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:46.454159 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:46.479472 1848358 cri.go:89] found id: ""
	I1216 02:57:46.479486 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.479493 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:46.479498 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:46.479557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:46.505579 1848358 cri.go:89] found id: ""
	I1216 02:57:46.505592 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.505599 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:46.505605 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:46.505665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:46.530373 1848358 cri.go:89] found id: ""
	I1216 02:57:46.530387 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.530394 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:46.530399 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:46.530464 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:46.554723 1848358 cri.go:89] found id: ""
	I1216 02:57:46.554736 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.554743 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:46.554748 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:46.554808 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:46.579147 1848358 cri.go:89] found id: ""
	I1216 02:57:46.579164 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.579171 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:46.579179 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:46.579189 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:46.634449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:46.634473 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:46.651968 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:46.651988 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:46.739219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:46.722068   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.723633   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.732527   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.733244   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.734892   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:46.722068   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.723633   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.732527   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.733244   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.734892   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:46.739239 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:46.739250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:46.812956 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:46.812976 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:49.345440 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:49.356029 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:49.356092 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:49.381514 1848358 cri.go:89] found id: ""
	I1216 02:57:49.381528 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.381535 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:49.381540 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:49.381608 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:49.411765 1848358 cri.go:89] found id: ""
	I1216 02:57:49.411779 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.411786 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:49.411791 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:49.411854 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:49.440610 1848358 cri.go:89] found id: ""
	I1216 02:57:49.440624 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.440631 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:49.440637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:49.440705 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:49.470688 1848358 cri.go:89] found id: ""
	I1216 02:57:49.470702 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.470709 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:49.470714 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:49.470774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:49.497170 1848358 cri.go:89] found id: ""
	I1216 02:57:49.497184 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.497191 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:49.497196 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:49.497254 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:49.521925 1848358 cri.go:89] found id: ""
	I1216 02:57:49.521940 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.521947 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:49.521952 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:49.522011 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:49.546344 1848358 cri.go:89] found id: ""
	I1216 02:57:49.546358 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.546366 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:49.546374 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:49.546385 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:49.602407 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:49.602426 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:49.619246 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:49.619263 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:49.683476 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:49.674979   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.675736   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677328   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677797   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.679458   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:49.683488 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:49.683499 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:49.752732 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:49.752753 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
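	The cycle above (pgrep for a running kube-apiserver, then one crictl query per expected component) repeats every few seconds while minikube waits for the control plane to appear. A minimal shell sketch of that probe sequence, assuming shell access to the node and the default containerd CRI socket — it only mirrors the commands visible in the log, not minikube's internal API:
	  #!/bin/bash
	  # Probe for the apiserver process first, then check each expected
	  # control-plane container by name. Empty crictl output means no
	  # container by that name was ever created under this containerd root.
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process up"
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    [ -z "$ids" ] && echo "no container matching \"$c\""
	  done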
	I1216 02:57:52.289101 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:52.300210 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:52.300272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:52.327757 1848358 cri.go:89] found id: ""
	I1216 02:57:52.327772 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.327779 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:52.327784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:52.327842 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:52.352750 1848358 cri.go:89] found id: ""
	I1216 02:57:52.352764 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.352771 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:52.352776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:52.352834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:52.377100 1848358 cri.go:89] found id: ""
	I1216 02:57:52.377114 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.377135 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:52.377140 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:52.377210 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:52.401376 1848358 cri.go:89] found id: ""
	I1216 02:57:52.401390 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.401397 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:52.401402 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:52.401462 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:52.428592 1848358 cri.go:89] found id: ""
	I1216 02:57:52.428606 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.428613 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:52.428618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:52.428677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:52.457192 1848358 cri.go:89] found id: ""
	I1216 02:57:52.457206 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.457213 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:52.457218 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:52.457276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:52.481473 1848358 cri.go:89] found id: ""
	I1216 02:57:52.481494 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.481501 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:52.481509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:52.481519 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:52.540087 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:52.540106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:52.560374 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:52.560391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:52.628219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:52.619222   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.619906   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.621689   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.622192   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.623773   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:52.628231 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:52.628241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:52.692110 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:52.692130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
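	The "container status" gather uses a shell fallback so the same line works on both CRI and Docker hosts: the backtick substitution `which crictl || echo crictl` expands to the resolved crictl path when it is installed, or to the bare name (which then fails and triggers the `|| sudo docker ps -a` branch). The idiom, exactly as the log runs it:
	  # Prefer crictl when present; otherwise fall back to the Docker CLI.
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a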
	I1216 02:57:55.226607 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:55.236818 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:55.236879 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:55.265073 1848358 cri.go:89] found id: ""
	I1216 02:57:55.265087 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.265094 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:55.265099 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:55.265160 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:55.291262 1848358 cri.go:89] found id: ""
	I1216 02:57:55.291276 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.291284 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:55.291289 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:55.291357 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:55.320515 1848358 cri.go:89] found id: ""
	I1216 02:57:55.320539 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.320546 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:55.320551 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:55.320620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:55.348402 1848358 cri.go:89] found id: ""
	I1216 02:57:55.348426 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.348433 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:55.348438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:55.348500 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:55.373391 1848358 cri.go:89] found id: ""
	I1216 02:57:55.373405 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.373413 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:55.373418 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:55.373480 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:55.402098 1848358 cri.go:89] found id: ""
	I1216 02:57:55.402111 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.402118 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:55.402124 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:55.402183 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:55.427824 1848358 cri.go:89] found id: ""
	I1216 02:57:55.427838 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.427845 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:55.427853 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:55.427863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:55.497187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:55.497216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:55.526960 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:55.526981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:55.585085 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:55.585105 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:55.602223 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:55.602241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:55.671427 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:55.662836   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.663796   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665445   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665748   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.667112   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
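	Every describe-nodes attempt fails identically: kubectl reads the kubeconfig fine, but nothing is listening on the apiserver port, so the TCP dial to [::1]:8441 is refused before any API discovery can happen. A quick hand probe for this state, assuming curl and iproute2 are on the node (/healthz is a standard apiserver endpoint; expect a dial failure while the apiserver is down and an HTTP response, possibly 401/403 depending on anonymous-auth settings, once it is listening):
	  # Is anything bound to the apiserver port at all?
	  sudo ss -ltn | grep 8441 || echo "nothing listening on 8441"
	  # TLS probe; -k skips cert verification, --max-time bounds the hang.
	  curl -sk --max-time 2 https://localhost:8441/healthz || echo "dial failed"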
	I1216 02:57:58.171689 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:58.181822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:58.181885 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:58.206129 1848358 cri.go:89] found id: ""
	I1216 02:57:58.206143 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.206150 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:58.206155 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:58.206214 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:58.230940 1848358 cri.go:89] found id: ""
	I1216 02:57:58.230954 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.230960 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:58.230966 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:58.231024 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:58.256698 1848358 cri.go:89] found id: ""
	I1216 02:57:58.256712 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.256720 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:58.256724 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:58.256788 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:58.281370 1848358 cri.go:89] found id: ""
	I1216 02:57:58.281385 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.281392 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:58.281396 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:58.281456 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:58.313032 1848358 cri.go:89] found id: ""
	I1216 02:57:58.313046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.313054 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:58.313059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:58.313124 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:58.337968 1848358 cri.go:89] found id: ""
	I1216 02:57:58.337982 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.337989 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:58.337994 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:58.338052 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:58.367215 1848358 cri.go:89] found id: ""
	I1216 02:57:58.367231 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.367239 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:58.367247 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:58.367259 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:58.433078 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:58.423612   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.424320   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426139   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426759   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.428489   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:58.433088 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:58.433099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:58.496751 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:58.496771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:58.528345 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:58.528362 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:58.585231 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:58.585249 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
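	The "Gathering logs" steps are plain journald and dmesg reads, each bounded to the newest 400 entries; they can be replayed verbatim when triaging a node by hand:
	  # Kubelet and containerd unit logs, newest 400 lines each.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u containerd -n 400
	  # Kernel ring buffer, warnings and worse, no pager or color.
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400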
	I1216 02:58:01.103256 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:01.114505 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:01.114572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:01.141817 1848358 cri.go:89] found id: ""
	I1216 02:58:01.141831 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.141838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:01.141843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:01.141908 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:01.170638 1848358 cri.go:89] found id: ""
	I1216 02:58:01.170653 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.170660 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:01.170667 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:01.170733 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:01.197958 1848358 cri.go:89] found id: ""
	I1216 02:58:01.197973 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.197980 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:01.197986 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:01.198051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:01.225715 1848358 cri.go:89] found id: ""
	I1216 02:58:01.225731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.225738 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:01.225744 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:01.225803 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:01.256157 1848358 cri.go:89] found id: ""
	I1216 02:58:01.256171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.256178 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:01.256184 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:01.256244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:01.281610 1848358 cri.go:89] found id: ""
	I1216 02:58:01.281625 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.281633 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:01.281638 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:01.281702 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:01.306348 1848358 cri.go:89] found id: ""
	I1216 02:58:01.306363 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.306370 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:01.306377 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:01.306388 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:01.335207 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:01.335224 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:01.392222 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:01.392242 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:01.408874 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:01.408890 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:01.472601 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:01.464071   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.464670   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.466724   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.467464   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.468593   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:01.472613 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:01.472626 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
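	Note that minikube invokes the version-pinned kubectl it ships onto the node (/var/lib/minikube/binaries/v1.35.0-beta.0/) rather than any kubectl on the host PATH, and points it at the node-local kubeconfig. The same invocation works interactively for spot checks; `get nodes` here is just a lighter stand-in for the log's `describe nodes`:
	  # Version-pinned kubectl with the node-local kubeconfig.
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide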
	I1216 02:58:04.035738 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:04.046578 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:04.046661 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:04.072441 1848358 cri.go:89] found id: ""
	I1216 02:58:04.072456 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.072463 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:04.072468 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:04.072531 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:04.103113 1848358 cri.go:89] found id: ""
	I1216 02:58:04.103128 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.103135 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:04.103139 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:04.103208 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:04.127981 1848358 cri.go:89] found id: ""
	I1216 02:58:04.127995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.128002 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:04.128007 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:04.128067 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:04.153050 1848358 cri.go:89] found id: ""
	I1216 02:58:04.153065 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.153072 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:04.153077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:04.153139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:04.176840 1848358 cri.go:89] found id: ""
	I1216 02:58:04.176854 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.176879 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:04.176885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:04.176954 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:04.205747 1848358 cri.go:89] found id: ""
	I1216 02:58:04.205771 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.205779 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:04.205784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:04.205853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:04.234453 1848358 cri.go:89] found id: ""
	I1216 02:58:04.234467 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.234474 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:04.234483 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:04.234505 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:04.294713 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:04.294732 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:04.312011 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:04.312029 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:04.378295 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:04.369406   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.370236   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372024   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372637   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.374434   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:04.378314 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:04.378325 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:04.440962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:04.440984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:06.970088 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:06.983751 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:06.983819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:07.013657 1848358 cri.go:89] found id: ""
	I1216 02:58:07.013672 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.013679 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:07.013684 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:07.013752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:07.038882 1848358 cri.go:89] found id: ""
	I1216 02:58:07.038896 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.038904 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:07.038909 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:07.038968 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:07.064215 1848358 cri.go:89] found id: ""
	I1216 02:58:07.064230 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.064237 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:07.064242 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:07.064304 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:07.088144 1848358 cri.go:89] found id: ""
	I1216 02:58:07.088158 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.088165 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:07.088170 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:07.088229 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:07.112044 1848358 cri.go:89] found id: ""
	I1216 02:58:07.112059 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.112066 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:07.112071 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:07.112137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:07.138570 1848358 cri.go:89] found id: ""
	I1216 02:58:07.138586 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.138593 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:07.138599 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:07.138658 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:07.166931 1848358 cri.go:89] found id: ""
	I1216 02:58:07.166945 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.166952 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:07.166959 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:07.166973 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:07.197292 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:07.197308 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:07.255003 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:07.255023 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:07.273531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:07.273547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:07.338842 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:07.330204   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.330976   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.332722   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.333328   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.334951   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:07.338852 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:07.338863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:09.902725 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:09.913150 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:09.913213 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:09.946614 1848358 cri.go:89] found id: ""
	I1216 02:58:09.946627 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.946634 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:09.946639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:09.946703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:09.975470 1848358 cri.go:89] found id: ""
	I1216 02:58:09.975484 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.975491 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:09.975496 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:09.975557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:10.002745 1848358 cri.go:89] found id: ""
	I1216 02:58:10.002773 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.002782 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:10.002787 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:10.002866 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:10.035489 1848358 cri.go:89] found id: ""
	I1216 02:58:10.035504 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.035512 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:10.035517 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:10.035581 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:10.062019 1848358 cri.go:89] found id: ""
	I1216 02:58:10.062044 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.062052 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:10.062059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:10.062139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:10.088952 1848358 cri.go:89] found id: ""
	I1216 02:58:10.088977 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.088986 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:10.088991 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:10.089061 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:10.115714 1848358 cri.go:89] found id: ""
	I1216 02:58:10.115736 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.115744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:10.115752 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:10.115762 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:10.172504 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:10.172524 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:10.190804 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:10.190821 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:10.258662 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:10.249875   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.250514   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252114   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252638   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.254176   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:10.258675 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:10.258686 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:10.321543 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:10.321562 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
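	The timestamps show a roughly three-second poll interval; minikube keeps cycling until its own start timeout expires. A hedged stand-in for that wait loop (the 120 s deadline below is illustrative, not minikube's actual timeout):
	  # Poll for the apiserver process with a hard deadline.
	  deadline=$((SECONDS + 120))   # illustrative; not minikube's real timeout
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    (( SECONDS >= deadline )) && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
	    sleep 3
	  done
	  echo "kube-apiserver is running"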
	I1216 02:58:12.849334 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:12.859284 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:12.859345 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:12.884624 1848358 cri.go:89] found id: ""
	I1216 02:58:12.884640 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.884648 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:12.884653 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:12.884722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:12.908735 1848358 cri.go:89] found id: ""
	I1216 02:58:12.908749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.908756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:12.908761 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:12.908819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:12.944827 1848358 cri.go:89] found id: ""
	I1216 02:58:12.944841 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.944848 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:12.944854 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:12.944917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:12.974281 1848358 cri.go:89] found id: ""
	I1216 02:58:12.974295 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.974302 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:12.974308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:12.974367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:13.008278 1848358 cri.go:89] found id: ""
	I1216 02:58:13.008294 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.008302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:13.008307 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:13.008376 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:13.034272 1848358 cri.go:89] found id: ""
	I1216 02:58:13.034286 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.034294 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:13.034299 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:13.034361 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:13.064663 1848358 cri.go:89] found id: ""
	I1216 02:58:13.064688 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.064695 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:13.064703 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:13.064716 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:13.127826 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:13.127848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:13.158482 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:13.158498 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:13.218053 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:13.218072 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:13.234830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:13.234846 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:13.298317 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:13.289893   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.290910   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.291765   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293309   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293580   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
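
[Editor's note] The block above is one iteration of minikube's wait-for-apiserver loop: it probes for a kube-apiserver process with pgrep, queries crictl for each expected control-plane container (every query returns no IDs), and then falls back to collecting diagnostics. A minimal, hypothetical Go sketch of that polling pattern follows; it is not minikube's actual code, and the two-minute deadline and helper name are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiServerRunning mirrors the probe in the log above:
// "sudo pgrep -xnf kube-apiserver.*minikube.*" exits 0 only when a
// matching process exists, so Run() returning nil means "found".
func apiServerRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout, not taken from the log
	for time.Now().Before(deadline) {
		if apiServerRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence of the cycles in this log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
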
	I1216 02:58:15.798590 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:15.809144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:15.809225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:15.834683 1848358 cri.go:89] found id: ""
	I1216 02:58:15.834696 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.834704 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:15.834709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:15.834774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:15.860001 1848358 cri.go:89] found id: ""
	I1216 02:58:15.860030 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.860038 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:15.860042 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:15.860113 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:15.884488 1848358 cri.go:89] found id: ""
	I1216 02:58:15.884503 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.884510 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:15.884515 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:15.884572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:15.908030 1848358 cri.go:89] found id: ""
	I1216 02:58:15.908045 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.908051 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:15.908056 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:15.908116 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:15.932641 1848358 cri.go:89] found id: ""
	I1216 02:58:15.932654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.932661 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:15.932666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:15.932723 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:15.962741 1848358 cri.go:89] found id: ""
	I1216 02:58:15.962754 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.962772 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:15.962779 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:15.962836 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:15.990774 1848358 cri.go:89] found id: ""
	I1216 02:58:15.990788 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.990806 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:15.990829 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:15.990838 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:16.067729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:16.067748 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:16.098615 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:16.098635 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:16.154944 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:16.154963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:16.172510 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:16.172527 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:16.237380 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:16.229269   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.229868   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231379   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231950   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.233594   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
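
[Editor's note] Each `found id: ""` / `0 containers: []` pair above comes from the same lookup: `crictl ps -a --quiet --name=<component>` prints one container ID per line, so empty output means the component's container was never created. A hedged sketch of that parsing step, assuming local crictl access (the helper name and component list are illustrative, not minikube's source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same query as the log and splits the
// output into non-empty lines; an empty result is the "0 containers" case.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if s := strings.TrimSpace(line); s != "" {
			ids = append(ids, s)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
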
	I1216 02:58:18.738100 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:18.751636 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:18.751717 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:18.779608 1848358 cri.go:89] found id: ""
	I1216 02:58:18.779622 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.779629 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:18.779634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:18.779693 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:18.805721 1848358 cri.go:89] found id: ""
	I1216 02:58:18.805735 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.805742 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:18.805747 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:18.805812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:18.831187 1848358 cri.go:89] found id: ""
	I1216 02:58:18.831203 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.831210 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:18.831215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:18.831280 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:18.857343 1848358 cri.go:89] found id: ""
	I1216 02:58:18.857367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.857375 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:18.857380 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:18.857448 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:18.882737 1848358 cri.go:89] found id: ""
	I1216 02:58:18.882751 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.882758 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:18.882765 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:18.882834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:18.907486 1848358 cri.go:89] found id: ""
	I1216 02:58:18.907500 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.907508 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:18.907513 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:18.907573 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:18.939361 1848358 cri.go:89] found id: ""
	I1216 02:58:18.939375 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.939382 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:18.939390 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:18.939401 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:19.019241 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:19.010485   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.010907   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.012525   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.013150   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.014918   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:19.019251 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:19.019262 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:19.081820 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:19.081842 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:19.110025 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:19.110042 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:19.166216 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:19.166236 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:21.684597 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:21.694910 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:21.694974 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:21.719581 1848358 cri.go:89] found id: ""
	I1216 02:58:21.719595 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.719602 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:21.719607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:21.719670 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:21.745661 1848358 cri.go:89] found id: ""
	I1216 02:58:21.745675 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.745682 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:21.745688 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:21.745745 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:21.770329 1848358 cri.go:89] found id: ""
	I1216 02:58:21.770342 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.770349 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:21.770354 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:21.770425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:21.795402 1848358 cri.go:89] found id: ""
	I1216 02:58:21.795416 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.795423 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:21.795434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:21.795492 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:21.821959 1848358 cri.go:89] found id: ""
	I1216 02:58:21.821972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.821979 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:21.821984 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:21.822043 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:21.845121 1848358 cri.go:89] found id: ""
	I1216 02:58:21.845135 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.845142 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:21.845148 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:21.845209 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:21.868958 1848358 cri.go:89] found id: ""
	I1216 02:58:21.868972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.868979 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:21.868987 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:21.868997 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:21.932460 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:21.924049   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.924916   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926515   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926825   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.928346   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
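
[Editor's note] The "Gathering logs for ..." steps in each cycle are plain shell pipelines: the last 400 journal lines for the kubelet and containerd units, kernel messages at warning level and above (in util-linux dmesg, -H selects human-readable output, -P suppresses the pager that -H would otherwise invoke, -L=never disables color, and --level restricts priorities), and a container listing that falls back to `docker ps` when crictl is absent. A self-contained sketch that runs the same command strings locally through bash -c (minikube runs them over SSH; the map of sources is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command strings as the log above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}
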
	I1216 02:58:21.932490 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:21.932502 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:22.006384 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:22.006415 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:22.040639 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:22.040655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:22.097981 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:22.098000 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:24.615636 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:24.626423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:24.626486 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:24.650890 1848358 cri.go:89] found id: ""
	I1216 02:58:24.650904 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.650911 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:24.650916 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:24.650984 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:24.676132 1848358 cri.go:89] found id: ""
	I1216 02:58:24.676146 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.676153 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:24.676158 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:24.676219 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:24.705732 1848358 cri.go:89] found id: ""
	I1216 02:58:24.705746 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.705753 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:24.705758 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:24.705820 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:24.729899 1848358 cri.go:89] found id: ""
	I1216 02:58:24.729914 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.729922 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:24.729927 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:24.729988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:24.760724 1848358 cri.go:89] found id: ""
	I1216 02:58:24.760744 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.760752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:24.760756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:24.760821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:24.789128 1848358 cri.go:89] found id: ""
	I1216 02:58:24.789144 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.789151 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:24.789157 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:24.789221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:24.814525 1848358 cri.go:89] found id: ""
	I1216 02:58:24.814539 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.814548 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:24.814555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:24.814567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:24.845234 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:24.845251 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:24.904816 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:24.904835 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:24.922721 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:24.922744 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:25.017286 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:25.006539   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.007431   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.009403   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.010041   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.013452   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:25.017298 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:25.017309 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.580148 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:27.590499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:27.590563 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:27.614749 1848358 cri.go:89] found id: ""
	I1216 02:58:27.614764 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.614771 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:27.614776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:27.614835 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:27.638735 1848358 cri.go:89] found id: ""
	I1216 02:58:27.638749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.638756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:27.638762 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:27.638821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:27.665480 1848358 cri.go:89] found id: ""
	I1216 02:58:27.665495 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.665503 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:27.665508 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:27.665565 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:27.695981 1848358 cri.go:89] found id: ""
	I1216 02:58:27.695996 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.696004 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:27.696009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:27.696088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:27.720368 1848358 cri.go:89] found id: ""
	I1216 02:58:27.720390 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.720397 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:27.720403 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:27.720469 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:27.746357 1848358 cri.go:89] found id: ""
	I1216 02:58:27.746371 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.746377 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:27.746383 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:27.746441 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:27.770684 1848358 cri.go:89] found id: ""
	I1216 02:58:27.770708 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.770716 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:27.770724 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:27.770734 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.836245 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:27.836265 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:27.865946 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:27.865964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:27.924653 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:27.924675 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:27.945999 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:27.946015 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:28.027275 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:28.018924   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.019507   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021144   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021655   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.023260   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
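
[Editor's note] Every describe-nodes attempt fails the same way: kubectl's discovery client retries the API group list five times against https://localhost:8441 and each attempt gets ECONNREFUSED on [::1]:8441, i.e. nothing is listening on that port inside the node. The failure reproduces without kubectl as a bare TCP dial; a trivial sketch, assuming the same host and port:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is being refused on in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // "connect: connection refused" here
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
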
	I1216 02:58:30.527490 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:30.537746 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:30.537811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:30.562783 1848358 cri.go:89] found id: ""
	I1216 02:58:30.562797 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.562805 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:30.562810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:30.562882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:30.587495 1848358 cri.go:89] found id: ""
	I1216 02:58:30.587509 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.587515 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:30.587521 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:30.587583 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:30.611375 1848358 cri.go:89] found id: ""
	I1216 02:58:30.611392 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.611400 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:30.611406 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:30.611472 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:30.635442 1848358 cri.go:89] found id: ""
	I1216 02:58:30.635457 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.635464 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:30.635469 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:30.635527 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:30.659725 1848358 cri.go:89] found id: ""
	I1216 02:58:30.659745 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.659752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:30.659757 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:30.659819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:30.683639 1848358 cri.go:89] found id: ""
	I1216 02:58:30.683654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.683661 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:30.683666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:30.683725 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:30.709231 1848358 cri.go:89] found id: ""
	I1216 02:58:30.709246 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.709252 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:30.709260 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:30.709271 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:30.765116 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:30.765136 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:30.782213 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:30.782230 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:30.843173 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:30.835436   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.836096   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837188   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837822   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.839482   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:30.843184 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:30.843195 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:30.905457 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:30.905477 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.448949 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:33.458942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:33.459006 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:33.494559 1848358 cri.go:89] found id: ""
	I1216 02:58:33.494573 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.494582 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:33.494602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:33.494672 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:33.521008 1848358 cri.go:89] found id: ""
	I1216 02:58:33.521028 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.521036 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:33.521041 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:33.521103 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:33.545598 1848358 cri.go:89] found id: ""
	I1216 02:58:33.545613 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.545620 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:33.545625 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:33.545684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:33.573194 1848358 cri.go:89] found id: ""
	I1216 02:58:33.573207 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.573214 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:33.573219 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:33.573284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:33.597747 1848358 cri.go:89] found id: ""
	I1216 02:58:33.597761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.597784 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:33.597789 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:33.597859 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:33.621788 1848358 cri.go:89] found id: ""
	I1216 02:58:33.621803 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.621810 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:33.621815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:33.621892 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:33.646528 1848358 cri.go:89] found id: ""
	I1216 02:58:33.646543 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.646550 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:33.646557 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:33.646567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:33.708165 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:33.708187 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.736001 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:33.736018 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:33.791763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:33.791786 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:33.808896 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:33.808912 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:33.876753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:33.868694   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.869434   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871119   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871558   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.873040   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:36.376982 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:36.386962 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:36.387033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:36.410927 1848358 cri.go:89] found id: ""
	I1216 02:58:36.410941 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.410948 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:36.410954 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:36.411013 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:36.436158 1848358 cri.go:89] found id: ""
	I1216 02:58:36.436171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.436179 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:36.436189 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:36.436260 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:36.460716 1848358 cri.go:89] found id: ""
	I1216 02:58:36.460730 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.460737 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:36.460743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:36.460815 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:36.485244 1848358 cri.go:89] found id: ""
	I1216 02:58:36.485258 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.485266 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:36.485272 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:36.485335 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:36.509347 1848358 cri.go:89] found id: ""
	I1216 02:58:36.509361 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.509368 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:36.509374 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:36.509434 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:36.534352 1848358 cri.go:89] found id: ""
	I1216 02:58:36.534367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.534374 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:36.534419 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:36.534481 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:36.560075 1848358 cri.go:89] found id: ""
	I1216 02:58:36.560090 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.560097 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:36.560105 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:36.560116 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:36.618652 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:36.618670 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:36.635627 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:36.635643 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:36.704527 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:36.695687   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.696277   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698043   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698600   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.700190   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
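The repeated `connection refused` here is a plain TCP-level failure: kubectl resolves `localhost` to the IPv6 loopback `[::1]` and dials port 8441, where nothing is listening because no kube-apiserver container ever started (see the empty crictl scans above). A small standalone probe that reproduces the same distinction, not minikube code:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// The endpoint the generated kubeconfig points kubectl at.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// ECONNREFUSED: the host answered but nothing is bound to the
		// port, which is exactly the failure repeated through these logs.
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("no apiserver listening on :8441 (connection refused)")
			return
		}
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```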
	I1216 02:58:36.704537 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:36.704550 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:36.767179 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:36.767199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
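The container-status gather above is a shell fallback chain: run `crictl ps -a` if crictl exists, and only if that fails fall back to `sudo docker ps -a` (the backticked `which crictl || echo crictl` just locates the binary). The same try-then-fall-back shape sketched in Go, under the assumption of a local shell:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the gather command: try crictl
// first; only if it exits non-zero fall back to `docker ps -a`.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}
```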
	I1216 02:58:39.295686 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:39.305848 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:39.305909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:39.329771 1848358 cri.go:89] found id: ""
	I1216 02:58:39.329785 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.329792 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:39.329797 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:39.329857 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:39.354814 1848358 cri.go:89] found id: ""
	I1216 02:58:39.354829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.354836 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:39.354841 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:39.354900 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:39.380095 1848358 cri.go:89] found id: ""
	I1216 02:58:39.380110 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.380117 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:39.380122 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:39.380182 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:39.404438 1848358 cri.go:89] found id: ""
	I1216 02:58:39.404453 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.404460 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:39.404465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:39.404526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:39.432615 1848358 cri.go:89] found id: ""
	I1216 02:58:39.432630 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.432636 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:39.432644 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:39.432709 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:39.456879 1848358 cri.go:89] found id: ""
	I1216 02:58:39.456893 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.456900 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:39.456905 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:39.456966 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:39.481400 1848358 cri.go:89] found id: ""
	I1216 02:58:39.481415 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.481421 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:39.481430 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:39.481441 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:39.540413 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:39.540433 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:39.558600 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:39.558618 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:39.623191 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:39.614791   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.615619   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617365   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617686   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.619229   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:39.623201 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:39.623212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:39.685663 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:39.685683 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
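The `pgrep -xnf kube-apiserver.*minikube.*` probes above and below repeat on a roughly three-second cadence (02:58:36, :39, :42, ...) until the apiserver process appears or an overall timeout expires. A sketch of such an outer wait loop; the interval and timeout here are illustrative, not minikube's actual settings:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe at the top of each cycle:
// `pgrep -xnf kube-apiserver.*minikube.*` exits 0 only when a matching
// process exists (-f matches the full command line, -n picks the newest).
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer re-probes on a fixed interval until success or timeout.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if apiserverRunning() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```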
	I1216 02:58:42.212532 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:42.242820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:42.242893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:42.277407 1848358 cri.go:89] found id: ""
	I1216 02:58:42.277427 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.277435 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:42.277441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:42.277513 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:42.313862 1848358 cri.go:89] found id: ""
	I1216 02:58:42.313877 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.313893 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:42.313898 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:42.313963 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:42.345979 1848358 cri.go:89] found id: ""
	I1216 02:58:42.345995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.346003 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:42.346009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:42.346075 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:42.372530 1848358 cri.go:89] found id: ""
	I1216 02:58:42.372545 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.372552 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:42.372558 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:42.372622 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:42.400807 1848358 cri.go:89] found id: ""
	I1216 02:58:42.400821 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.400829 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:42.400834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:42.400901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:42.426053 1848358 cri.go:89] found id: ""
	I1216 02:58:42.426067 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.426074 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:42.426079 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:42.426137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:42.453460 1848358 cri.go:89] found id: ""
	I1216 02:58:42.453475 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.453482 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:42.453490 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:42.453500 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:42.509219 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:42.509237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:42.526995 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:42.527011 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:42.589697 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:42.581361   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.581873   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.583507   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.584135   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.585772   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:42.589706 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:42.589723 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:42.655306 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:42.655326 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:45.183328 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:45.217035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:45.217117 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:45.257225 1848358 cri.go:89] found id: ""
	I1216 02:58:45.257247 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.257258 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:45.257264 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:45.257334 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:45.304389 1848358 cri.go:89] found id: ""
	I1216 02:58:45.304407 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.304416 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:45.304423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:45.304509 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:45.334339 1848358 cri.go:89] found id: ""
	I1216 02:58:45.334354 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.334362 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:45.334367 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:45.334435 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:45.360176 1848358 cri.go:89] found id: ""
	I1216 02:58:45.360190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.360198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:45.360203 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:45.360263 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:45.384648 1848358 cri.go:89] found id: ""
	I1216 02:58:45.384663 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.384669 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:45.384678 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:45.384738 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:45.411115 1848358 cri.go:89] found id: ""
	I1216 02:58:45.411131 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.411138 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:45.411144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:45.411218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:45.437746 1848358 cri.go:89] found id: ""
	I1216 02:58:45.437761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.437768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:45.437776 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:45.437797 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:45.500791 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:45.500811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:45.530882 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:45.530899 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:45.588591 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:45.588609 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:45.605872 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:45.605900 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:45.673187 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:45.663658   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.664990   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.665900   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.667592   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.668146   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:48.173453 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:48.186360 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:48.186425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:48.216541 1848358 cri.go:89] found id: ""
	I1216 02:58:48.216556 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.216563 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:48.216568 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:48.216633 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:48.243385 1848358 cri.go:89] found id: ""
	I1216 02:58:48.243399 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.243407 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:48.243412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:48.243473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:48.268738 1848358 cri.go:89] found id: ""
	I1216 02:58:48.268752 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.268759 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:48.268764 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:48.268825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:48.293634 1848358 cri.go:89] found id: ""
	I1216 02:58:48.293649 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.293657 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:48.293662 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:48.293722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:48.320780 1848358 cri.go:89] found id: ""
	I1216 02:58:48.320796 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.320805 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:48.320810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:48.320872 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:48.344687 1848358 cri.go:89] found id: ""
	I1216 02:58:48.344701 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.344710 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:48.344715 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:48.344775 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:48.368368 1848358 cri.go:89] found id: ""
	I1216 02:58:48.368383 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.368390 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:48.368398 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:48.368407 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:48.424495 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:48.424515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:48.441644 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:48.441660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:48.506701 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:48.498238   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.498920   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.500649   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.501232   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.502941   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:48.506710 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:48.506721 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:48.569962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:48.569984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
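Each `describe nodes` gather shells out to the version-pinned kubectl binary with an explicit `--kubeconfig`, and the report shows `stdout:` and `stderr:` separately because they are captured in separate buffers. A sketch of that capture pattern, assuming a local shell in place of ssh_runner:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the gather step, with stdout and stderr captured
	// in separate buffers so both can be reported on failure.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
			err, stdout.String(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}
```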
	I1216 02:58:51.098190 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:51.108977 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:51.109048 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:51.134223 1848358 cri.go:89] found id: ""
	I1216 02:58:51.134237 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.134244 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:51.134249 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:51.134310 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:51.161239 1848358 cri.go:89] found id: ""
	I1216 02:58:51.161253 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.161261 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:51.161266 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:51.161326 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:51.202211 1848358 cri.go:89] found id: ""
	I1216 02:58:51.202225 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.202232 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:51.202237 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:51.202296 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:51.233630 1848358 cri.go:89] found id: ""
	I1216 02:58:51.233651 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.233658 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:51.233663 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:51.233728 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:51.270204 1848358 cri.go:89] found id: ""
	I1216 02:58:51.270219 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.270233 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:51.270238 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:51.270301 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:51.298689 1848358 cri.go:89] found id: ""
	I1216 02:58:51.298705 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.298716 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:51.298722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:51.298799 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:51.323107 1848358 cri.go:89] found id: ""
	I1216 02:58:51.323126 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.323133 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:51.323140 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:51.323150 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:51.386665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:51.386693 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:51.404372 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:51.404391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:51.469512 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:51.460793   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.461262   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.462922   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.463490   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.465130   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:51.469532 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:51.469554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:51.535704 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:51.535725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:54.065223 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:54.077244 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:54.077307 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:54.106090 1848358 cri.go:89] found id: ""
	I1216 02:58:54.106103 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.106110 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:54.106115 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:54.106177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:54.131805 1848358 cri.go:89] found id: ""
	I1216 02:58:54.131819 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.131833 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:54.131838 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:54.131899 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:54.156816 1848358 cri.go:89] found id: ""
	I1216 02:58:54.156829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.156837 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:54.156842 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:54.156901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:54.181654 1848358 cri.go:89] found id: ""
	I1216 02:58:54.181669 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.181693 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:54.181698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:54.181765 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:54.219797 1848358 cri.go:89] found id: ""
	I1216 02:58:54.219812 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.219819 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:54.219833 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:54.219910 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:54.251176 1848358 cri.go:89] found id: ""
	I1216 02:58:54.251190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.251197 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:54.251202 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:54.251265 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:54.275716 1848358 cri.go:89] found id: ""
	I1216 02:58:54.275731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.275739 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:54.275747 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:54.275758 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:54.338395 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:54.330425   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.330860   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332372   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332678   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.334183   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:54.338408 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:54.338429 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:54.401729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:54.401749 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:54.429361 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:54.429376 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:54.489525 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:54.489545 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:57.006993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:57.017732 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:57.017792 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:57.042221 1848358 cri.go:89] found id: ""
	I1216 02:58:57.042235 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.042242 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:57.042248 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:57.042316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:57.069364 1848358 cri.go:89] found id: ""
	I1216 02:58:57.069378 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.069385 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:57.069390 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:57.069450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:57.093795 1848358 cri.go:89] found id: ""
	I1216 02:58:57.093808 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.093815 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:57.093820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:57.093881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:57.118148 1848358 cri.go:89] found id: ""
	I1216 02:58:57.118161 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.118168 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:57.118177 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:57.118235 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:57.142161 1848358 cri.go:89] found id: ""
	I1216 02:58:57.142175 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.142182 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:57.142187 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:57.142247 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:57.169165 1848358 cri.go:89] found id: ""
	I1216 02:58:57.169178 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.169186 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:57.169191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:57.169256 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:57.200840 1848358 cri.go:89] found id: ""
	I1216 02:58:57.200855 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.200862 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:57.200870 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:57.200881 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:57.260426 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:57.260444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:57.285637 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:57.285654 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:57.350704 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:57.342168   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.342999   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.344857   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.345195   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.346755   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:57.350714 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:57.350727 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:57.413587 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:57.413606 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:59.944007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:59.954621 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:59.954685 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:59.979450 1848358 cri.go:89] found id: ""
	I1216 02:58:59.979466 1848358 logs.go:282] 0 containers: []
	W1216 02:58:59.979474 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:59.979479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:59.979543 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:00.040218 1848358 cri.go:89] found id: ""
	I1216 02:59:00.040237 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.040245 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:00.040251 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:00.040325 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:00.225643 1848358 cri.go:89] found id: ""
	I1216 02:59:00.225659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.225666 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:00.225679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:00.225749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:00.292916 1848358 cri.go:89] found id: ""
	I1216 02:59:00.292933 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.292941 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:00.292947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:00.293016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:00.327359 1848358 cri.go:89] found id: ""
	I1216 02:59:00.327375 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.327383 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:00.327389 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:00.327463 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:00.362091 1848358 cri.go:89] found id: ""
	I1216 02:59:00.362107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.362116 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:00.362121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:00.362205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:00.392615 1848358 cri.go:89] found id: ""
	I1216 02:59:00.392648 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.392656 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:00.392665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:00.392677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:00.411628 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:00.411646 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:00.485425 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:00.476589   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.477159   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.478750   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.479401   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.480376   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:00.485435 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:00.485446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:00.548759 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:00.548779 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:00.579219 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:00.579235 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
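
[Annotation] The cycle above is minikube's apiserver health-check loop: it polls for a kube-apiserver process with pgrep, asks the CRI for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, gathers kubelet/dmesg/describe-nodes/containerd logs before retrying. The following is a minimal Go sketch of that polling pattern, not minikube's actual code (the real logic lives in cri.go/logs.go); it assumes crictl and pgrep are available on the node, and the component names and ~3s cadence mirror the log.

    // apiserverpoll.go: illustrative sketch of the poll-and-diagnose
    // loop visible in the log above (hypothetical, not minikube's code).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // listContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
    // it returns the IDs of containers whose name matches, or nil.
    func listContainers(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil || len(strings.TrimSpace(string(out))) == 0 {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for i := 0; i < 10; i++ {
            // A zero exit status from pgrep means the process exists.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process found")
                return
            }
            for _, c := range components {
                if ids := listContainers(c); len(ids) == 0 {
                    fmt.Printf("no container was found matching %q\n", c)
                }
            }
            time.Sleep(3 * time.Second) // the log shows a ~3s retry cadence
        }
    }
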
	I1216 02:59:03.138643 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:03.151350 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:03.151414 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:03.177456 1848358 cri.go:89] found id: ""
	I1216 02:59:03.177480 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.177489 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:03.177494 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:03.177576 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:03.209025 1848358 cri.go:89] found id: ""
	I1216 02:59:03.209054 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.209063 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:03.209068 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:03.209142 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:03.245557 1848358 cri.go:89] found id: ""
	I1216 02:59:03.245571 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.245578 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:03.245583 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:03.245651 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:03.273887 1848358 cri.go:89] found id: ""
	I1216 02:59:03.273902 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.273909 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:03.273914 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:03.273980 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:03.299955 1848358 cri.go:89] found id: ""
	I1216 02:59:03.299970 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.299977 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:03.299987 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:03.300050 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:03.325891 1848358 cri.go:89] found id: ""
	I1216 02:59:03.325906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.325913 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:03.325918 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:03.325977 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:03.353059 1848358 cri.go:89] found id: ""
	I1216 02:59:03.353073 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.353080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:03.353088 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:03.353101 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:03.409018 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:03.409040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:03.427124 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:03.427141 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:03.498219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:03.489642   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.490076   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.491637   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.492014   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.493527   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:03.489642   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.490076   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.491637   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.492014   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.493527   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:03.498236 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:03.498250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:03.563005 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:03.563031 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
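
[Annotation] The recurring "dial tcp [::1]:8441: connect: connection refused" in each describe-nodes failure means nothing is listening on the apiserver port (8441 here, from the test's --apiserver-port flag); kubectl never gets far enough to fetch the API group list. A quick way to confirm that symptom independently of kubectl is a plain TCP dial, sketched below; this is an illustrative check, not part of the test harness.

    // dialcheck.go: confirms the failure mode seen in the log above:
    // "connection refused" on the apiserver port is just a missing listener.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Port 8441 matches this test's --apiserver-port; adjust as needed.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }
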
	I1216 02:59:06.091678 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:06.102426 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:06.102489 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:06.127426 1848358 cri.go:89] found id: ""
	I1216 02:59:06.127439 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.127446 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:06.127452 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:06.127511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:06.152255 1848358 cri.go:89] found id: ""
	I1216 02:59:06.152270 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.152277 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:06.152282 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:06.152344 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:06.181806 1848358 cri.go:89] found id: ""
	I1216 02:59:06.181832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.181840 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:06.181846 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:06.181909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:06.211543 1848358 cri.go:89] found id: ""
	I1216 02:59:06.211558 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.211565 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:06.211576 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:06.211638 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:06.239433 1848358 cri.go:89] found id: ""
	I1216 02:59:06.239448 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.239454 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:06.239460 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:06.239521 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:06.265180 1848358 cri.go:89] found id: ""
	I1216 02:59:06.265199 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.265206 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:06.265212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:06.265273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:06.288594 1848358 cri.go:89] found id: ""
	I1216 02:59:06.288608 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.288615 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:06.288622 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:06.288633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:06.347416 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:06.347440 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:06.365120 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:06.365137 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:06.429753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:06.422151   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.422683   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424172   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424494   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.425949   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:06.422151   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.422683   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424172   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424494   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.425949   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:06.429762 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:06.429772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:06.491187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:06.491205 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:09.021976 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:09.032138 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:09.032199 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:09.056495 1848358 cri.go:89] found id: ""
	I1216 02:59:09.056509 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.056517 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:09.056522 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:09.056579 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:09.085249 1848358 cri.go:89] found id: ""
	I1216 02:59:09.085263 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.085269 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:09.085275 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:09.085336 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:09.109270 1848358 cri.go:89] found id: ""
	I1216 02:59:09.109284 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.109291 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:09.109296 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:09.109365 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:09.134217 1848358 cri.go:89] found id: ""
	I1216 02:59:09.134231 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.134238 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:09.134243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:09.134305 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:09.158656 1848358 cri.go:89] found id: ""
	I1216 02:59:09.158670 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.158677 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:09.158682 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:09.158749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:09.190922 1848358 cri.go:89] found id: ""
	I1216 02:59:09.190937 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.190944 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:09.190949 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:09.191020 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:09.231605 1848358 cri.go:89] found id: ""
	I1216 02:59:09.231619 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.231633 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:09.231642 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:09.231652 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:09.293613 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:09.293633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:09.310949 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:09.310966 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:09.378806 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:09.369691   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.370608   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372360   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372849   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.374328   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:09.369691   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.370608   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372360   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372849   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.374328   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:09.378816 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:09.378827 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:09.440510 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:09.440528 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:11.972007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:11.982340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:11.982402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:12.014868 1848358 cri.go:89] found id: ""
	I1216 02:59:12.014883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.014890 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:12.014895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:12.014969 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:12.040987 1848358 cri.go:89] found id: ""
	I1216 02:59:12.041002 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.041008 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:12.041013 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:12.041090 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:12.065526 1848358 cri.go:89] found id: ""
	I1216 02:59:12.065540 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.065561 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:12.065566 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:12.065635 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:12.093806 1848358 cri.go:89] found id: ""
	I1216 02:59:12.093833 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.093841 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:12.093849 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:12.093921 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:12.121567 1848358 cri.go:89] found id: ""
	I1216 02:59:12.121595 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.121602 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:12.121607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:12.121677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:12.144869 1848358 cri.go:89] found id: ""
	I1216 02:59:12.144883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.144890 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:12.144895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:12.144955 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:12.168723 1848358 cri.go:89] found id: ""
	I1216 02:59:12.168737 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.168744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:12.168752 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:12.168769 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:12.185531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:12.185547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:12.264487 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:12.255585   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.256344   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258045   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258783   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.260488   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:12.255585   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.256344   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258045   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258783   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.260488   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:12.264497 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:12.264508 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:12.326049 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:12.326068 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:12.353200 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:12.353216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:14.910970 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:14.924577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:14.924643 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:14.953399 1848358 cri.go:89] found id: ""
	I1216 02:59:14.953413 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.953420 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:14.953432 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:14.953495 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:14.978792 1848358 cri.go:89] found id: ""
	I1216 02:59:14.978806 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.978815 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:14.978821 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:14.978880 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:15.008511 1848358 cri.go:89] found id: ""
	I1216 02:59:15.008528 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.008536 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:15.008542 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:15.008624 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:15.053197 1848358 cri.go:89] found id: ""
	I1216 02:59:15.053213 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.053220 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:15.053226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:15.053293 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:15.082542 1848358 cri.go:89] found id: ""
	I1216 02:59:15.082557 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.082564 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:15.082570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:15.082634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:15.109527 1848358 cri.go:89] found id: ""
	I1216 02:59:15.109542 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.109550 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:15.109556 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:15.109634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:15.137809 1848358 cri.go:89] found id: ""
	I1216 02:59:15.137823 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.137830 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:15.137838 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:15.137849 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:15.211501 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:15.202592   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.203475   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205236   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205549   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.207125   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:15.202592   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.203475   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205236   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205549   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.207125   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:15.211511 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:15.211523 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:15.285555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:15.285576 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:15.314442 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:15.314458 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:15.370796 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:15.370818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:17.889239 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:17.899171 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:17.899236 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:17.924099 1848358 cri.go:89] found id: ""
	I1216 02:59:17.924113 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.924121 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:17.924126 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:17.924187 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:17.950817 1848358 cri.go:89] found id: ""
	I1216 02:59:17.950832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.950838 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:17.950843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:17.950903 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:17.976899 1848358 cri.go:89] found id: ""
	I1216 02:59:17.976913 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.976920 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:17.976925 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:17.976987 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:18.003139 1848358 cri.go:89] found id: ""
	I1216 02:59:18.003156 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.003164 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:18.003169 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:18.003244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:18.032644 1848358 cri.go:89] found id: ""
	I1216 02:59:18.032659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.032666 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:18.032671 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:18.032740 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:18.058880 1848358 cri.go:89] found id: ""
	I1216 02:59:18.058895 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.058906 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:18.058915 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:18.058988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:18.084275 1848358 cri.go:89] found id: ""
	I1216 02:59:18.084290 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.084298 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:18.084306 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:18.084318 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:18.146637 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:18.146665 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:18.164002 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:18.164022 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:18.241086 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:18.231204   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.232053   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.233635   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.234184   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.235838   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:18.231204   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.232053   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.233635   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.234184   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.235838   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:18.241097 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:18.241110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:18.306777 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:18.306796 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:20.840754 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:20.850885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:20.850942 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:20.880985 1848358 cri.go:89] found id: ""
	I1216 02:59:20.881000 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.881007 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:20.881012 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:20.881071 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:20.904789 1848358 cri.go:89] found id: ""
	I1216 02:59:20.904803 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.904810 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:20.904815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:20.904873 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:20.929350 1848358 cri.go:89] found id: ""
	I1216 02:59:20.929362 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.929370 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:20.929381 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:20.929438 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:20.953473 1848358 cri.go:89] found id: ""
	I1216 02:59:20.953487 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.953493 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:20.953499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:20.953558 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:20.977718 1848358 cri.go:89] found id: ""
	I1216 02:59:20.977731 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.977738 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:20.977743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:20.977800 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:21.001640 1848358 cri.go:89] found id: ""
	I1216 02:59:21.001657 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.001664 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:21.001669 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:21.001752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:21.030827 1848358 cri.go:89] found id: ""
	I1216 02:59:21.030840 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.030847 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:21.030855 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:21.030865 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:21.086683 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:21.086703 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:21.106615 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:21.106638 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:21.196393 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:21.180051   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.181461   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.182376   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186322   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186918   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:21.180051   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.181461   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.182376   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186322   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186918   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:21.196410 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:21.196420 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:21.259711 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:21.259730 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:23.788985 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:23.801081 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:23.801153 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:23.831711 1848358 cri.go:89] found id: ""
	I1216 02:59:23.831732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.831740 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:23.831745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:23.831812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:23.857025 1848358 cri.go:89] found id: ""
	I1216 02:59:23.857040 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.857047 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:23.857052 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:23.857115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:23.885653 1848358 cri.go:89] found id: ""
	I1216 02:59:23.885667 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.885674 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:23.885679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:23.885739 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:23.912974 1848358 cri.go:89] found id: ""
	I1216 02:59:23.912987 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.912996 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:23.913001 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:23.913062 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:23.936892 1848358 cri.go:89] found id: ""
	I1216 02:59:23.936906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.936914 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:23.936919 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:23.936978 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:23.959826 1848358 cri.go:89] found id: ""
	I1216 02:59:23.959841 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.959848 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:23.959853 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:23.959912 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:23.987747 1848358 cri.go:89] found id: ""
	I1216 02:59:23.987760 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.987767 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:23.987775 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:23.987785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:24.043435 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:24.043453 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:24.060830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:24.060848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:24.129870 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:24.121071   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.121882   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.123511   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.124023   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.125643   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:24.129882 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:24.129893 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:24.192043 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:24.192064 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
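The block above is one iteration of minikube's apiserver wait loop: pgrep looks for a running kube-apiserver process, crictl is then queried for each expected control-plane container, and when every query comes back empty the loop falls back to gathering logs. A minimal Go sketch of the same checks (the hard-coded component list and all names here are illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the containers probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	// pgrep exits non-zero when nothing matches, as in the log.
	if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
		fmt.Println("kube-apiserver process is running")
		return
	}
	for _, name := range components {
		// crictl prints one container ID per line; empty output means
		// the component's container was never created.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		fmt.Printf("%s: %d container(s)\n", name, len(strings.Fields(string(out))))
	}
}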
	I1216 02:59:26.722933 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:26.733462 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:26.733528 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:26.757094 1848358 cri.go:89] found id: ""
	I1216 02:59:26.757109 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.757115 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:26.757121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:26.757190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:26.785265 1848358 cri.go:89] found id: ""
	I1216 02:59:26.785279 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.785286 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:26.785291 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:26.785348 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:26.809734 1848358 cri.go:89] found id: ""
	I1216 02:59:26.809748 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.809755 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:26.809760 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:26.809823 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:26.833900 1848358 cri.go:89] found id: ""
	I1216 02:59:26.833914 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.833921 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:26.833926 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:26.833983 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:26.858364 1848358 cri.go:89] found id: ""
	I1216 02:59:26.858381 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.858388 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:26.858392 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:26.858476 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:26.884221 1848358 cri.go:89] found id: ""
	I1216 02:59:26.884235 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.884242 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:26.884247 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:26.884306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:26.909747 1848358 cri.go:89] found id: ""
	I1216 02:59:26.909761 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.909768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:26.909776 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:26.909785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:26.965217 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:26.965237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:26.982549 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:26.982573 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:27.049273 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:27.041323   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.041704   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043346   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043928   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.045499   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:27.049282 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:27.049293 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:27.112656 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:27.112677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
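Every "failed describe nodes" block above has the same root cause: kubectl dials the configured apiserver endpoint, localhost:8441, and nothing is listening, so the kernel answers with connection refused. The symptom can be reproduced without kubectl; a minimal sketch, assuming only the address taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost resolves to [::1] first here, matching the
	// "dial tcp [::1]:8441: connect: connection refused" lines above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on port 8441")
}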
	I1216 02:59:29.642709 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:29.652965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:29.653051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:29.681994 1848358 cri.go:89] found id: ""
	I1216 02:59:29.682008 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.682030 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:29.682037 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:29.682106 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:29.710335 1848358 cri.go:89] found id: ""
	I1216 02:59:29.710350 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.710357 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:29.710363 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:29.710454 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:29.737846 1848358 cri.go:89] found id: ""
	I1216 02:59:29.737861 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.737868 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:29.737873 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:29.737943 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:29.763917 1848358 cri.go:89] found id: ""
	I1216 02:59:29.763931 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.763938 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:29.763944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:29.764015 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:29.788324 1848358 cri.go:89] found id: ""
	I1216 02:59:29.788338 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.788345 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:29.788351 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:29.788409 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:29.812477 1848358 cri.go:89] found id: ""
	I1216 02:59:29.812490 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.812497 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:29.812502 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:29.812561 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:29.840464 1848358 cri.go:89] found id: ""
	I1216 02:59:29.840479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.840486 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:29.840495 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:29.840509 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:29.905495 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:29.897197   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.898006   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899547   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899856   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.901335   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:29.905505 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:29.905515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:29.967090 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:29.967110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:29.999894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:29.999910 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:30.095570 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:30.095596 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
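With no control-plane containers to read, each cycle collects four diagnostic sources instead: the last 400 journald lines for the kubelet and containerd units, kernel messages filtered to warning level and above, and a container listing that falls back to docker when crictl is unavailable. The same shell commands can be run by hand on the node; a sketch (the command strings are copied from the log, the wrapper around them is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same commands the log issues through ssh_runner.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u containerd -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}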
	I1216 02:59:32.614024 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:32.624941 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:32.625007 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:32.649578 1848358 cri.go:89] found id: ""
	I1216 02:59:32.649593 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.649601 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:32.649606 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:32.649665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:32.678365 1848358 cri.go:89] found id: ""
	I1216 02:59:32.678379 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.678386 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:32.678391 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:32.678450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:32.703205 1848358 cri.go:89] found id: ""
	I1216 02:59:32.703219 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.703226 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:32.703231 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:32.703295 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:32.727484 1848358 cri.go:89] found id: ""
	I1216 02:59:32.727499 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.727506 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:32.727511 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:32.727568 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:32.753092 1848358 cri.go:89] found id: ""
	I1216 02:59:32.753106 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.753113 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:32.753119 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:32.753178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:32.781551 1848358 cri.go:89] found id: ""
	I1216 02:59:32.781565 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.781572 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:32.781577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:32.781636 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:32.807153 1848358 cri.go:89] found id: ""
	I1216 02:59:32.807168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.807176 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:32.807184 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:32.807199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:32.863763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:32.863782 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:32.880478 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:32.880495 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:32.950082 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:32.941362   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.942084   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.943575   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.944217   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.945946   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:32.950092 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:32.950102 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:33.016099 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:33.016121 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:35.546066 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:35.557055 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:35.557115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:35.582927 1848358 cri.go:89] found id: ""
	I1216 02:59:35.582951 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.582960 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:35.582965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:35.583033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:35.608110 1848358 cri.go:89] found id: ""
	I1216 02:59:35.608124 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.608131 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:35.608141 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:35.608203 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:35.632465 1848358 cri.go:89] found id: ""
	I1216 02:59:35.632479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.632485 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:35.632490 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:35.632555 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:35.661165 1848358 cri.go:89] found id: ""
	I1216 02:59:35.661179 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.661198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:35.661204 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:35.661272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:35.686050 1848358 cri.go:89] found id: ""
	I1216 02:59:35.686064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.686081 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:35.686087 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:35.686156 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:35.711189 1848358 cri.go:89] found id: ""
	I1216 02:59:35.711203 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.711210 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:35.711215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:35.711276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:35.735024 1848358 cri.go:89] found id: ""
	I1216 02:59:35.735072 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.735080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:35.735089 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:35.735099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:35.790017 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:35.790036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:35.807195 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:35.807212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:35.870014 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:35.862369   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.862875   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864373   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864766   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.866205   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:35.870024 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:35.870036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:35.933113 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:35.933134 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:38.460684 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:38.471131 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:38.471193 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:38.508160 1848358 cri.go:89] found id: ""
	I1216 02:59:38.508175 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.508183 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:38.508188 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:38.508257 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:38.540297 1848358 cri.go:89] found id: ""
	I1216 02:59:38.540312 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.540320 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:38.540324 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:38.540388 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:38.566230 1848358 cri.go:89] found id: ""
	I1216 02:59:38.566244 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.566252 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:38.566257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:38.566321 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:38.591818 1848358 cri.go:89] found id: ""
	I1216 02:59:38.591832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.591839 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:38.591844 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:38.591911 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:38.618603 1848358 cri.go:89] found id: ""
	I1216 02:59:38.618617 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.618624 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:38.618629 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:38.618689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:38.643310 1848358 cri.go:89] found id: ""
	I1216 02:59:38.643324 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.643331 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:38.643337 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:38.643402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:38.667065 1848358 cri.go:89] found id: ""
	I1216 02:59:38.667080 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.667087 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:38.667095 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:38.667106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:38.699522 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:38.699540 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:38.757880 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:38.757898 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:38.774888 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:38.774903 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:38.842015 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:38.834115   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.834681   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836207   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836756   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.838251   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:38.842025 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:38.842036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:41.405157 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:41.416379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:41.416447 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:41.446560 1848358 cri.go:89] found id: ""
	I1216 02:59:41.446578 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.446596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:41.446602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:41.446675 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:41.483188 1848358 cri.go:89] found id: ""
	I1216 02:59:41.483202 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.483209 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:41.483213 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:41.483274 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:41.516110 1848358 cri.go:89] found id: ""
	I1216 02:59:41.516140 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.516147 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:41.516152 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:41.516218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:41.540839 1848358 cri.go:89] found id: ""
	I1216 02:59:41.540853 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.540860 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:41.540866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:41.540926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:41.566596 1848358 cri.go:89] found id: ""
	I1216 02:59:41.566622 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.566629 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:41.566634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:41.566706 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:41.590702 1848358 cri.go:89] found id: ""
	I1216 02:59:41.590717 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.590724 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:41.590729 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:41.590791 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:41.616252 1848358 cri.go:89] found id: ""
	I1216 02:59:41.616276 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.616283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:41.616291 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:41.616303 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:41.645509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:41.645525 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:41.704141 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:41.704159 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:41.721706 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:41.721725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:41.783974 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:41.776246   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.776782   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778294   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778717   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.780191   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:41.783984 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:41.784019 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
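Note that the describe-nodes probe runs inside the node with the node's own versioned kubectl binary and the node-local kubeconfig, so the refused connection is observed on the control-plane host itself, not just from the test host. The equivalent invocation, sketched in Go (both paths are copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The versioned binary matches the --kubernetes-version under test.
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}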
	I1216 02:59:44.346692 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:44.357118 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:44.357181 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:44.382575 1848358 cri.go:89] found id: ""
	I1216 02:59:44.382589 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.382596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:44.382601 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:44.382666 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:44.407349 1848358 cri.go:89] found id: ""
	I1216 02:59:44.407363 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.407370 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:44.407375 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:44.407442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:44.438660 1848358 cri.go:89] found id: ""
	I1216 02:59:44.438674 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.438681 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:44.438693 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:44.438748 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:44.483154 1848358 cri.go:89] found id: ""
	I1216 02:59:44.483168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.483175 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:44.483180 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:44.483239 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:44.512253 1848358 cri.go:89] found id: ""
	I1216 02:59:44.512267 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.512274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:44.512283 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:44.512341 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:44.537396 1848358 cri.go:89] found id: ""
	I1216 02:59:44.537410 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.537427 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:44.537434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:44.537510 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:44.562261 1848358 cri.go:89] found id: ""
	I1216 02:59:44.562275 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.562283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:44.562291 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:44.562300 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:44.630850 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:44.630877 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:44.660268 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:44.660294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:44.721274 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:44.721294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:44.738464 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:44.738482 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:44.804552 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:44.796057   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.796830   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798532   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798943   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.800543   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
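The timestamps show the whole sequence repeating on a roughly three-second cadence (02:59:23, :26, :29, :32, ...), the shape of a fixed-interval poll that keeps retrying until an overall start timeout expires. A minimal sketch of that loop; the interval is inferred from the timestamps and the timeout is illustrative, neither is taken from minikube's source:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer retries the pgrep probe until it succeeds or times out.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}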
	I1216 02:59:47.304816 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:47.315117 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:47.315178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:47.344292 1848358 cri.go:89] found id: ""
	I1216 02:59:47.344306 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.344314 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:47.344319 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:47.344381 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:47.367920 1848358 cri.go:89] found id: ""
	I1216 02:59:47.367934 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.367942 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:47.367947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:47.368017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:47.392383 1848358 cri.go:89] found id: ""
	I1216 02:59:47.392397 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.392404 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:47.392409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:47.392473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:47.415620 1848358 cri.go:89] found id: ""
	I1216 02:59:47.415634 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.415641 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:47.415646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:47.415703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:47.454281 1848358 cri.go:89] found id: ""
	I1216 02:59:47.454295 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.454302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:47.454308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:47.454367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:47.487808 1848358 cri.go:89] found id: ""
	I1216 02:59:47.487822 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.487829 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:47.487834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:47.487893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:47.515510 1848358 cri.go:89] found id: ""
	I1216 02:59:47.515523 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.515531 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:47.515538 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:47.515551 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:47.582935 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:47.574325   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.575137   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.576881   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.577372   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.578856   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:47.574325   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.575137   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.576881   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.577372   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.578856   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:47.582951 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:47.582963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:47.644716 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:47.644735 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:47.673055 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:47.673071 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:47.729448 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:47.729467 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:50.247207 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:50.257829 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:50.257894 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:50.282406 1848358 cri.go:89] found id: ""
	I1216 02:59:50.282422 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.282429 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:50.282435 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:50.282497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:50.307428 1848358 cri.go:89] found id: ""
	I1216 02:59:50.307442 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.307450 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:50.307455 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:50.307514 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:50.332093 1848358 cri.go:89] found id: ""
	I1216 02:59:50.332107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.332114 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:50.332120 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:50.332179 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:50.357137 1848358 cri.go:89] found id: ""
	I1216 02:59:50.357151 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.357158 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:50.357163 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:50.357227 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:50.380923 1848358 cri.go:89] found id: ""
	I1216 02:59:50.380938 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.380945 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:50.380950 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:50.381008 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:50.404673 1848358 cri.go:89] found id: ""
	I1216 02:59:50.404687 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.404695 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:50.404700 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:50.404762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:50.428594 1848358 cri.go:89] found id: ""
	I1216 02:59:50.428609 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.428616 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:50.428624 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:50.428634 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:50.511977 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:50.503194   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.503744   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505371   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505827   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.507476   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:50.503194   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.503744   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505371   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505827   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.507476   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:50.511987 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:50.511998 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:50.575372 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:50.575394 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:50.603193 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:50.603215 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:50.660351 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:50.660370 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:53.177329 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:53.187812 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:53.187876 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:53.212765 1848358 cri.go:89] found id: ""
	I1216 02:59:53.212780 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.212787 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:53.212792 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:53.212855 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:53.237571 1848358 cri.go:89] found id: ""
	I1216 02:59:53.237584 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.237591 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:53.237596 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:53.237657 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:53.261989 1848358 cri.go:89] found id: ""
	I1216 02:59:53.262003 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.262010 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:53.262015 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:53.262077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:53.291843 1848358 cri.go:89] found id: ""
	I1216 02:59:53.291857 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.291864 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:53.291869 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:53.291929 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:53.316569 1848358 cri.go:89] found id: ""
	I1216 02:59:53.316583 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.316590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:53.316595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:53.316655 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:53.340200 1848358 cri.go:89] found id: ""
	I1216 02:59:53.340214 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.340221 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:53.340226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:53.340284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:53.364767 1848358 cri.go:89] found id: ""
	I1216 02:59:53.364782 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.364789 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:53.364796 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:53.364806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:53.423540 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:53.423559 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:53.440975 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:53.440990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:53.518181 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:53.509741   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.510408   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512145   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512724   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.514366   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:53.509741   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.510408   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512145   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512724   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.514366   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:53.518190 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:53.518201 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:53.580231 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:53.580250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:56.109099 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:56.119430 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:56.119493 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:56.144050 1848358 cri.go:89] found id: ""
	I1216 02:59:56.144064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.144072 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:56.144077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:56.144137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:56.168768 1848358 cri.go:89] found id: ""
	I1216 02:59:56.168783 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.168790 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:56.168794 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:56.168858 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:56.193611 1848358 cri.go:89] found id: ""
	I1216 02:59:56.193625 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.193633 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:56.193637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:56.193694 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:56.218383 1848358 cri.go:89] found id: ""
	I1216 02:59:56.218396 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.218415 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:56.218420 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:56.218532 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:56.244850 1848358 cri.go:89] found id: ""
	I1216 02:59:56.244864 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.244871 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:56.244888 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:56.244960 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:56.272142 1848358 cri.go:89] found id: ""
	I1216 02:59:56.272167 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.272174 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:56.272181 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:56.272252 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:56.296464 1848358 cri.go:89] found id: ""
	I1216 02:59:56.296478 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.296485 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:56.296493 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:56.296503 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:56.351797 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:56.351818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:56.368635 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:56.368655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:56.433327 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:56.425076   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.425853   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.427469   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.428121   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.429570   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:56.425076   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.425853   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.427469   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.428121   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.429570   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:56.433336 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:56.433346 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:56.509361 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:56.509380 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:59.037187 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:59.047286 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:59.047351 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:59.072817 1848358 cri.go:89] found id: ""
	I1216 02:59:59.072831 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.072838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:59.072843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:59.072914 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:59.098681 1848358 cri.go:89] found id: ""
	I1216 02:59:59.098696 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.098708 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:59.098713 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:59.098774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:59.124932 1848358 cri.go:89] found id: ""
	I1216 02:59:59.124945 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.124953 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:59.124958 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:59.125017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:59.149561 1848358 cri.go:89] found id: ""
	I1216 02:59:59.149575 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.149581 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:59.149586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:59.149646 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:59.174402 1848358 cri.go:89] found id: ""
	I1216 02:59:59.174417 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.174426 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:59.174431 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:59.174497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:59.199717 1848358 cri.go:89] found id: ""
	I1216 02:59:59.199732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.199740 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:59.199745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:59.199812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:59.225754 1848358 cri.go:89] found id: ""
	I1216 02:59:59.225768 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.225787 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:59.225795 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:59.225806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:59.288033 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:59.288058 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:59.316114 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:59.316130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:59.373962 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:59.373981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:59.390958 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:59.390978 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:59.466112 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:59.455145   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.456493   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458087   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458370   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.462047   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:59.455145   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.456493   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458087   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458370   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.462047   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:01.968417 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:01.996618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:01.996689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:02.075342 1848358 cri.go:89] found id: ""
	I1216 03:00:02.075366 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.075373 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:02.075379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:02.075457 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:02.107614 1848358 cri.go:89] found id: ""
	I1216 03:00:02.107629 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.107637 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:02.107646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:02.107720 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:02.137752 1848358 cri.go:89] found id: ""
	I1216 03:00:02.137768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.137776 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:02.137782 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:02.137853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:02.169435 1848358 cri.go:89] found id: ""
	I1216 03:00:02.169452 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.169459 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:02.169465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:02.169546 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:02.198391 1848358 cri.go:89] found id: ""
	I1216 03:00:02.198423 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.198431 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:02.198438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:02.198511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:02.227862 1848358 cri.go:89] found id: ""
	I1216 03:00:02.227877 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.227885 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:02.227891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:02.227959 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:02.256236 1848358 cri.go:89] found id: ""
	I1216 03:00:02.256251 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.256269 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:02.256278 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:02.256290 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:02.315559 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:02.315582 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:02.334230 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:02.334248 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:02.404903 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:02.395443   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.396222   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.398711   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.399382   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.400828   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:02.395443   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.396222   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.398711   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.399382   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.400828   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:02.404912 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:02.404923 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:02.469074 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:02.469095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:05.003993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:05.018300 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:05.018420 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:05.047301 1848358 cri.go:89] found id: ""
	I1216 03:00:05.047316 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.047323 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:05.047335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:05.047400 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:05.072682 1848358 cri.go:89] found id: ""
	I1216 03:00:05.072697 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.072704 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:05.072709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:05.072770 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:05.102478 1848358 cri.go:89] found id: ""
	I1216 03:00:05.102493 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.102502 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:05.102507 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:05.102578 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:05.132728 1848358 cri.go:89] found id: ""
	I1216 03:00:05.132743 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.132750 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:05.132756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:05.132825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:05.158706 1848358 cri.go:89] found id: ""
	I1216 03:00:05.158721 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.158728 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:05.158733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:05.158795 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:05.184666 1848358 cri.go:89] found id: ""
	I1216 03:00:05.184681 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.184688 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:05.184694 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:05.184756 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:05.216197 1848358 cri.go:89] found id: ""
	I1216 03:00:05.216213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.216221 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:05.216229 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:05.216239 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:05.278419 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:05.278439 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:05.309753 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:05.309771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:05.366862 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:05.366880 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:05.384427 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:05.384446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:05.452157 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:05.443910   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.444698   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446307   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446727   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.448188   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:05.443910   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.444698   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446307   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446727   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.448188   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:07.952402 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:07.967145 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:07.967225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:07.998164 1848358 cri.go:89] found id: ""
	I1216 03:00:07.998178 1848358 logs.go:282] 0 containers: []
	W1216 03:00:07.998185 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:07.998191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:07.998251 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:08.032873 1848358 cri.go:89] found id: ""
	I1216 03:00:08.032889 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.032896 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:08.032901 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:08.032964 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:08.059832 1848358 cri.go:89] found id: ""
	I1216 03:00:08.059846 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.059854 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:08.059859 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:08.059933 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:08.087232 1848358 cri.go:89] found id: ""
	I1216 03:00:08.087246 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.087253 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:08.087258 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:08.087316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:08.114253 1848358 cri.go:89] found id: ""
	I1216 03:00:08.114267 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.114274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:08.114280 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:08.114343 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:08.139972 1848358 cri.go:89] found id: ""
	I1216 03:00:08.139987 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.139994 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:08.139999 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:08.140141 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:08.165613 1848358 cri.go:89] found id: ""
	I1216 03:00:08.165628 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.165637 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:08.165645 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:08.165655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:08.221696 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:08.221715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:08.240189 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:08.240206 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:08.320945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:08.311750   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.312401   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314217   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314799   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.316399   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:08.311750   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.312401   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314217   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314799   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.316399   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:08.320954 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:08.320964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:08.384243 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:08.384275 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:10.913864 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:10.926998 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:10.927108 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:10.962440 1848358 cri.go:89] found id: ""
	I1216 03:00:10.962454 1848358 logs.go:282] 0 containers: []
	W1216 03:00:10.962461 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:10.962466 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:10.962526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:11.004569 1848358 cri.go:89] found id: ""
	I1216 03:00:11.004589 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.004598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:11.004610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:11.005096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:11.034401 1848358 cri.go:89] found id: ""
	I1216 03:00:11.034415 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.034429 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:11.034434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:11.034508 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:11.065292 1848358 cri.go:89] found id: ""
	I1216 03:00:11.065309 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.065317 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:11.065325 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:11.065394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:11.092043 1848358 cri.go:89] found id: ""
	I1216 03:00:11.092057 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.092065 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:11.092070 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:11.092163 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:11.121914 1848358 cri.go:89] found id: ""
	I1216 03:00:11.121929 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.121936 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:11.121942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:11.122014 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:11.147863 1848358 cri.go:89] found id: ""
	I1216 03:00:11.147879 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.147886 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:11.147894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:11.147906 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:11.213267 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:11.213287 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:11.231545 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:11.231561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:11.303516 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:11.294839   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.295358   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.296660   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.297169   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.298964   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:11.303525 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:11.303544 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:11.375152 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:11.375181 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
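The block above is one iteration of minikube's apiserver wait loop: it looks for a kube-apiserver process with pgrep, asks crictl for containers matching each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and then collects kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying a few seconds later. The repeated "connection refused" from kubectl simply means nothing is listening on port 8441 yet. A minimal sketch of the same probe, using the commands from the log (the loop itself is an illustration, not minikube's actual Go code):

    # Poll until an apiserver process or container shows up (sketch; see assumptions above).
    while true; do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver process is running"; break
      fi
      ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
      if [ -n "$ids" ]; then echo "kube-apiserver container(s): $ids"; break; fi
      sleep 3
    done

Each cycle below repeats this probe with fresh timestamps until the retry budget runs out.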
	I1216 03:00:13.905997 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:13.916685 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:13.916754 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:13.946670 1848358 cri.go:89] found id: ""
	I1216 03:00:13.946698 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.946705 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:13.946711 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:13.946782 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:13.978544 1848358 cri.go:89] found id: ""
	I1216 03:00:13.978558 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.978565 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:13.978570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:13.978630 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:14.010045 1848358 cri.go:89] found id: ""
	I1216 03:00:14.010060 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.010068 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:14.010073 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:14.010148 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:14.039695 1848358 cri.go:89] found id: ""
	I1216 03:00:14.039709 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.039717 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:14.039722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:14.039786 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:14.065918 1848358 cri.go:89] found id: ""
	I1216 03:00:14.065932 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.065939 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:14.065944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:14.066002 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:14.092594 1848358 cri.go:89] found id: ""
	I1216 03:00:14.092607 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.092615 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:14.092620 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:14.092684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:14.117022 1848358 cri.go:89] found id: ""
	I1216 03:00:14.117036 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.117043 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:14.117052 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:14.117063 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:14.145392 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:14.145409 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:14.201319 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:14.201338 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:14.218382 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:14.218397 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:14.286945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:14.279281   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.279802   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281416   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281996   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.283003   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:14.286956 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:14.286968 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:16.848830 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:16.859224 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:16.859288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:16.900559 1848358 cri.go:89] found id: ""
	I1216 03:00:16.900573 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.900580 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:16.900586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:16.900660 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:16.925198 1848358 cri.go:89] found id: ""
	I1216 03:00:16.925213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.925221 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:16.925226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:16.925288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:16.968532 1848358 cri.go:89] found id: ""
	I1216 03:00:16.968545 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.968552 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:16.968557 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:16.968620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:17.001327 1848358 cri.go:89] found id: ""
	I1216 03:00:17.001343 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.001351 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:17.001357 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:17.001427 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:17.029828 1848358 cri.go:89] found id: ""
	I1216 03:00:17.029843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.029850 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:17.029855 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:17.029917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:17.055865 1848358 cri.go:89] found id: ""
	I1216 03:00:17.055880 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.055887 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:17.055892 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:17.055956 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:17.081782 1848358 cri.go:89] found id: ""
	I1216 03:00:17.081796 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.081804 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:17.081812 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:17.081823 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:17.137664 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:17.137684 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:17.155387 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:17.155413 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:17.223693 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:17.215814   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.216359   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.217875   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.218284   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.219788   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:17.223704 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:17.223715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:17.285895 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:17.285915 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:19.819792 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:19.830531 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:19.830595 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:19.855374 1848358 cri.go:89] found id: ""
	I1216 03:00:19.855388 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.855395 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:19.855400 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:19.855459 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:19.880613 1848358 cri.go:89] found id: ""
	I1216 03:00:19.880627 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.880634 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:19.880639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:19.880701 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:19.905217 1848358 cri.go:89] found id: ""
	I1216 03:00:19.905231 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.905238 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:19.905243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:19.905306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:19.938230 1848358 cri.go:89] found id: ""
	I1216 03:00:19.938245 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.938252 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:19.938257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:19.938318 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:19.972308 1848358 cri.go:89] found id: ""
	I1216 03:00:19.972322 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.972330 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:19.972335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:19.972396 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:20.009826 1848358 cri.go:89] found id: ""
	I1216 03:00:20.009843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.009851 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:20.009857 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:20.009931 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:20.047016 1848358 cri.go:89] found id: ""
	I1216 03:00:20.047031 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.047075 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:20.047084 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:20.047095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:20.105420 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:20.105444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:20.123806 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:20.123824 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:20.193387 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:20.184716   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.185705   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.187519   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.188104   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.189189   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:20.193399 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:20.193410 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:20.256212 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:20.256232 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:22.788953 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:22.799143 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:22.799205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:22.824912 1848358 cri.go:89] found id: ""
	I1216 03:00:22.824926 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.824933 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:22.824938 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:22.824999 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:22.848993 1848358 cri.go:89] found id: ""
	I1216 03:00:22.849007 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.849014 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:22.849019 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:22.849077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:22.873445 1848358 cri.go:89] found id: ""
	I1216 03:00:22.873467 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.873476 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:22.873481 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:22.873548 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:22.898928 1848358 cri.go:89] found id: ""
	I1216 03:00:22.898952 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.898960 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:22.898965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:22.899088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:22.924441 1848358 cri.go:89] found id: ""
	I1216 03:00:22.924455 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.924462 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:22.924471 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:22.924536 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:22.972165 1848358 cri.go:89] found id: ""
	I1216 03:00:22.972187 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.972194 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:22.972200 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:22.972272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:23.007998 1848358 cri.go:89] found id: ""
	I1216 03:00:23.008014 1848358 logs.go:282] 0 containers: []
	W1216 03:00:23.008021 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:23.008030 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:23.008041 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:23.074846 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:23.065592   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.066370   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.068447   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.069048   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.070772   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:23.074856 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:23.074867 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:23.141968 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:23.141990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:23.170755 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:23.170772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:23.229156 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:23.229176 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:25.746547 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:25.757092 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:25.757177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:25.781744 1848358 cri.go:89] found id: ""
	I1216 03:00:25.781758 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.781765 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:25.781770 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:25.781829 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:25.810185 1848358 cri.go:89] found id: ""
	I1216 03:00:25.810200 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.810207 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:25.810212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:25.810273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:25.837797 1848358 cri.go:89] found id: ""
	I1216 03:00:25.837810 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.837818 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:25.837822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:25.837881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:25.864444 1848358 cri.go:89] found id: ""
	I1216 03:00:25.864466 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.864474 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:25.864479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:25.864537 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:25.889170 1848358 cri.go:89] found id: ""
	I1216 03:00:25.889185 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.889192 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:25.889197 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:25.889253 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:25.913381 1848358 cri.go:89] found id: ""
	I1216 03:00:25.913396 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.913403 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:25.913409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:25.913468 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:25.956168 1848358 cri.go:89] found id: ""
	I1216 03:00:25.956184 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.956191 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:25.956199 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:25.956209 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:25.987017 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:25.987032 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:26.056762 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:26.056783 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:26.074582 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:26.074599 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:26.142533 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:26.133438   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.134117   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.135700   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.136346   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.138045   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:26.142543 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:26.142554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:28.704757 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:28.715093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:28.715171 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:28.756309 1848358 cri.go:89] found id: ""
	I1216 03:00:28.756339 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.756350 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:28.756355 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:28.756442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:28.786013 1848358 cri.go:89] found id: ""
	I1216 03:00:28.786027 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.786033 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:28.786038 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:28.786099 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:28.813243 1848358 cri.go:89] found id: ""
	I1216 03:00:28.813257 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.813264 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:28.813269 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:28.813329 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:28.837627 1848358 cri.go:89] found id: ""
	I1216 03:00:28.837642 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.837649 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:28.837654 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:28.837714 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:28.862744 1848358 cri.go:89] found id: ""
	I1216 03:00:28.862768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.862775 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:28.862780 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:28.862850 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:28.888763 1848358 cri.go:89] found id: ""
	I1216 03:00:28.888777 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.888784 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:28.888790 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:28.888851 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:28.913212 1848358 cri.go:89] found id: ""
	I1216 03:00:28.913226 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.913234 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:28.913242 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:28.913252 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:28.973937 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:28.973957 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:28.995906 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:28.995924 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:29.068971 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:29.060478   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.060883   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062455   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062780   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.064407   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:29.068980 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:29.068994 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:29.132688 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:29.132707 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:31.666915 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:31.677125 1848358 kubeadm.go:602] duration metric: took 4m1.758576282s to restartPrimaryControlPlane
	W1216 03:00:31.677186 1848358 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:00:31.677266 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:00:32.091488 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
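After roughly four minutes of these probes (the "took 4m1.758576282s" line above), minikube abandons the restart path and resets the cluster before re-initializing it. The reset step in the log is equivalent to running the following on the node (paths taken from the log; --force skips the interactive confirmation):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force

It then confirms the kubelet service is inactive and copies the freshly rendered kubeadm.yaml into place before the stale-config checks that follow.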
	I1216 03:00:32.105369 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:00:32.113490 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:00:32.113550 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:00:32.122054 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:00:32.122064 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:00:32.122120 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:00:32.130622 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:00:32.130682 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:00:32.138437 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:00:32.146797 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:00:32.146863 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:00:32.155178 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.163734 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:00:32.163795 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.171993 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:00:32.180028 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:00:32.180097 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
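The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes should reference the expected control-plane endpoint, and any file that does not (or, as here, does not exist, so grep exits with status 2) is removed with rm -f, which is a no-op for missing files. A compact equivalent of the four checks, with file names and endpoint taken from the log:

    # Sketch of the stale-kubeconfig cleanup shown above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done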
	I1216 03:00:32.188091 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:00:32.228785 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:00:32.228977 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:00:32.306472 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:00:32.306542 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:00:32.306577 1848358 kubeadm.go:319] OS: Linux
	I1216 03:00:32.306630 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:00:32.306684 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:00:32.306730 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:00:32.306783 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:00:32.306837 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:00:32.306884 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:00:32.306934 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:00:32.306987 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:00:32.307033 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:00:32.370232 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:00:32.370342 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:00:32.370445 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:00:32.376940 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:00:32.380870 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:00:32.380973 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:00:32.381073 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:00:32.381166 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:00:32.381227 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:00:32.381296 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:00:32.381349 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:00:32.381411 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:00:32.381496 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:00:32.381600 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:00:32.381683 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:00:32.381723 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:00:32.381783 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:00:32.587867 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:00:32.728887 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:00:33.127071 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:00:33.632583 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:00:33.851925 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:00:33.852650 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:00:33.855273 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:00:33.858613 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:00:33.858712 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:00:33.858788 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:00:33.858854 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:00:33.878797 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:00:33.879802 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:00:33.887340 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:00:33.887615 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:00:33.887656 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:00:34.023686 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:00:34.027990 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:04:34.028846 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005338087s
	I1216 03:04:34.028875 1848358 kubeadm.go:319] 
	I1216 03:04:34.028931 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:04:34.028963 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:04:34.029067 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:04:34.029071 1848358 kubeadm.go:319] 
	I1216 03:04:34.029175 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:04:34.029206 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:04:34.029236 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:04:34.029239 1848358 kubeadm.go:319] 
	I1216 03:04:34.033654 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:04:34.034083 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:04:34.034191 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:34.034426 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:04:34.034431 1848358 kubeadm.go:319] 
	I1216 03:04:34.034499 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 03:04:34.034613 1848358 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005338087s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 03:04:34.034714 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:04:34.442103 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:34.455899 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:34.455954 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:34.464166 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:34.464176 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:34.464227 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:04:34.472141 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:34.472197 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:34.479703 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:04:34.487496 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:34.487553 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:34.495305 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.504218 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:34.504277 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.512085 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:04:34.520037 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:34.520091 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:34.527590 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:34.569546 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:04:34.569597 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:34.648580 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:34.648645 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:04:34.648680 1848358 kubeadm.go:319] OS: Linux
	I1216 03:04:34.648724 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:34.648775 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:04:34.648847 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:34.648894 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:34.648941 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:34.648988 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:34.649031 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:34.649078 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:34.649123 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:04:34.718553 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:34.718667 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:34.718765 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:04:34.725198 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:04:34.730521 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:34.730604 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:34.730670 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:34.730745 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:04:34.730804 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:04:34.730873 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:04:34.730926 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:04:34.730988 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:04:34.731077 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:04:34.731151 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:04:34.731222 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:04:34.731258 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:04:34.731313 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:34.775823 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:35.226979 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:04:35.500835 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:35.803186 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:35.922858 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:35.923646 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:35.926392 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:35.929487 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:04:35.929587 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:35.929670 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:35.930420 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:35.952397 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:35.952501 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:04:35.960726 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:04:35.961037 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:35.961210 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:36.110987 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:04:36.111155 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:08:36.111000 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000244075s
	I1216 03:08:36.111025 1848358 kubeadm.go:319] 
	I1216 03:08:36.111095 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:08:36.111126 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:08:36.111231 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:08:36.111235 1848358 kubeadm.go:319] 
	I1216 03:08:36.111337 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:08:36.111368 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:08:36.111397 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:08:36.111401 1848358 kubeadm.go:319] 
	I1216 03:08:36.115184 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:08:36.115598 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:08:36.115704 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:08:36.115939 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:08:36.115944 1848358 kubeadm.go:319] 
	I1216 03:08:36.116012 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 03:08:36.116067 1848358 kubeadm.go:403] duration metric: took 12m6.232765178s to StartCluster
	I1216 03:08:36.116112 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:08:36.116177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:08:36.140414 1848358 cri.go:89] found id: ""
	I1216 03:08:36.140430 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.140437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:08:36.140442 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:08:36.140504 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:08:36.164577 1848358 cri.go:89] found id: ""
	I1216 03:08:36.164590 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.164598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:08:36.164604 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:08:36.164663 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:08:36.188307 1848358 cri.go:89] found id: ""
	I1216 03:08:36.188321 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.188328 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:08:36.188333 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:08:36.188394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:08:36.213037 1848358 cri.go:89] found id: ""
	I1216 03:08:36.213050 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.213057 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:08:36.213062 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:08:36.213121 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:08:36.239675 1848358 cri.go:89] found id: ""
	I1216 03:08:36.239690 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.239698 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:08:36.239704 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:08:36.239762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:08:36.262932 1848358 cri.go:89] found id: ""
	I1216 03:08:36.262947 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.262955 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:08:36.262960 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:08:36.263018 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:08:36.288318 1848358 cri.go:89] found id: ""
	I1216 03:08:36.288332 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.288340 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:08:36.288349 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:08:36.288358 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:08:36.350247 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:08:36.350267 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:08:36.380644 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:08:36.380660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:08:36.436449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:08:36.436466 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:08:36.457199 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:08:36.457222 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:08:36.526010 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1216 03:08:36.526029 1848358 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 03:08:36.526065 1848358 out.go:285] * 
	W1216 03:08:36.526124 1848358 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.526137 1848358 out.go:285] * 
	W1216 03:08:36.528271 1848358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:08:36.533177 1848358 out.go:203] 
	W1216 03:08:36.537050 1848358 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.537112 1848358 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 03:08:36.537136 1848358 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 03:08:36.540537 1848358 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418983774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418998239Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419036154Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419097175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419108202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419119509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419128805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419140062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419155980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419187823Z" level=info msg="Connect containerd service"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419497668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.420076931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439480285Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439840672Z" level=info msg="Start recovering state"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439686821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.443248018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513022632Z" level=info msg="Start event monitor"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513204659Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513279259Z" level=info msg="Start streaming server"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513342856Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513405935Z" level=info msg="runtime interface starting up..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513471920Z" level=info msg="starting plugins..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513539119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:56:28 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.516797790Z" level=info msg="containerd successfully booted in 0.120064s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:11:07.410019   23247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:07.410740   23247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:07.412207   23247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:07.412685   23247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:07.414179   23247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:11:07 up  8:53,  0 user,  load average: 1.01, 0.44, 0.54
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:11:04 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:04 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 518.
	Dec 16 03:11:04 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:04 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:05 functional-389759 kubelet[23081]: E1216 03:11:05.002299   23081 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:05 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:05 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:05 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 519.
	Dec 16 03:11:05 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:05 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:05 functional-389759 kubelet[23110]: E1216 03:11:05.746153   23110 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:05 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:05 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:06 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 520.
	Dec 16 03:11:06 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:06 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:06 functional-389759 kubelet[23153]: E1216 03:11:06.486526   23153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:06 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:06 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:07 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 521.
	Dec 16 03:11:07 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:07 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:07 functional-389759 kubelet[23210]: E1216 03:11:07.245022   23210 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:07 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:07 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
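The kubelet journal above pins down the root cause: kubelet v1.35.0-beta.0 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), which matches the kubeadm [WARNING SystemVerification] text about setting 'FailCgroupV1' to 'false'. A minimal sketch of that opt-in, assuming the kubelet keeps loading the /var/lib/kubelet/config.yaml that kubeadm writes above and that the field name matches the warning text (illustrative only, not a validated fix for this job):

	# Sketch only: opt back into cgroup v1 support as the kubeadm warning describes.
	# Assumes /var/lib/kubelet/config.yaml is the active KubeletConfiguration and
	# does not already set failCgroupV1.
	cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet
	# Re-check the endpoint kubeadm polls during wait-control-plane:
	curl -sSL http://127.0.0.1:10248/healthz

Note that the same warning also requires the SystemVerification preflight check to be skipped explicitly; the kubeadm init invocation above already does that via its --ignore-preflight-errors list.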
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (382.979995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.02s)
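
The remaining parallel-test failures below all cascade from this stopped apiserver rather than from independent bugs. For reference, the retry that minikube itself suggests above ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") would look roughly like the following; this simply restates the log's own suggestion as a command (profile name taken from this run), and since the kubelet error here is cgroup v1 validation rather than a cgroup-driver mismatch, it may not be sufficient on this host:

	# The log's suggested workaround, restated as a command (sketch only):
	out/minikube-linux-arm64 start -p functional-389759 \
	  --extra-config=kubelet.cgroup-driver=systemd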

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-389759 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-389759 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.019716ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-389759 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-389759 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-389759 describe po hello-node-connect: exit status 1 (54.743671ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-389759 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-389759 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-389759 logs -l app=hello-node-connect: exit status 1 (63.364881ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-389759 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-389759 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-389759 describe svc hello-node-connect: exit status 1 (65.458292ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-389759 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
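In the inspect output, the PortBindings entry for 8441/tcp requests an ephemeral host port (HostPort is empty), and NetworkSettings.Ports shows Docker published it on 127.0.0.1:34357. The mapped port can be extracted with the same Go-template pattern minikube itself uses for the SSH port later in this log (a hedged sketch):

	# Print the host port Docker published for the apiserver port 8441/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389759

So the container and its port mapping look healthy; the refused connections originate inside the guest.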
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (322.580678ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
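The --format={{.Host}} template prints only the host field, which is why stdout says Running while the command still exits non-zero: minikube encodes component states in the exit code, and status 2 here is consistent with a running container whose cluster components are down. The per-component breakdown makes that explicit (sketch, using the same binary under test):

	# JSON output reports host, kubelet, and apiserver states separately
	out/minikube-linux-arm64 status -p functional-389759 --output json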
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-389759 cache reload                                                                                                                               │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ ssh     │ functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │ 16 Dec 25 02:56 UTC │
	│ kubectl │ functional-389759 kubectl -- --context functional-389759 get pods                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ start   │ -p functional-389759 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:56 UTC │                     │
	│ cp      │ functional-389759 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ config  │ functional-389759 config unset cpus                                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ config  │ functional-389759 config get cpus                                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	│ config  │ functional-389759 config set cpus 2                                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ config  │ functional-389759 config get cpus                                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ functional-389759 ssh -n functional-389759 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ config  │ functional-389759 config unset cpus                                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ config  │ functional-389759 config get cpus                                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	│ ssh     │ functional-389759 ssh echo hello                                                                                                                             │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ cp      │ functional-389759 cp functional-389759:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1312419370/001/cp-test.txt │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ functional-389759 ssh cat /etc/hostname                                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ ssh     │ functional-389759 ssh -n functional-389759 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ tunnel  │ functional-389759 tunnel --alsologtostderr                                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	│ tunnel  │ functional-389759 tunnel --alsologtostderr                                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	│ cp      │ functional-389759 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ tunnel  │ functional-389759 tunnel --alsologtostderr                                                                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │                     │
	│ ssh     │ functional-389759 ssh -n functional-389759 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:08 UTC │ 16 Dec 25 03:08 UTC │
	│ addons  │ functional-389759 addons list                                                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │ 16 Dec 25 03:10 UTC │
	│ addons  │ functional-389759 addons list -o json                                                                                                                        │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │ 16 Dec 25 03:10 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:56:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:56:25.844373 1848358 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:56:25.844466 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844470 1848358 out.go:374] Setting ErrFile to fd 2...
	I1216 02:56:25.844474 1848358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:56:25.844836 1848358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:56:25.845570 1848358 out.go:368] Setting JSON to false
	I1216 02:56:25.846389 1848358 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":31130,"bootTime":1765822656,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:56:25.846449 1848358 start.go:143] virtualization:  
	I1216 02:56:25.849867 1848358 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:56:25.854549 1848358 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:56:25.854652 1848358 notify.go:221] Checking for updates...
	I1216 02:56:25.860318 1848358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:56:25.863452 1848358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:56:25.866454 1848358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:56:25.869328 1848358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:56:25.872192 1848358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:56:25.875771 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:25.875865 1848358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:56:25.910877 1848358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:56:25.910989 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:25.979751 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:25.969640801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:56:25.979847 1848358 docker.go:319] overlay module found
	I1216 02:56:25.984585 1848358 out.go:179] * Using the docker driver based on existing profile
	I1216 02:56:25.987331 1848358 start.go:309] selected driver: docker
	I1216 02:56:25.987339 1848358 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:25.987425 1848358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:56:25.987525 1848358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:56:26.045497 1848358 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 02:56:26.035789712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:56:26.045925 1848358 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:56:26.045948 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:26.045996 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:26.046044 1848358 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:26.049158 1848358 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
	I1216 02:56:26.052095 1848358 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:56:26.055176 1848358 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:56:26.058088 1848358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:56:26.058108 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:26.058178 1848358 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:56:26.058195 1848358 cache.go:65] Caching tarball of preloaded images
	I1216 02:56:26.058305 1848358 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 02:56:26.058312 1848358 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 02:56:26.058447 1848358 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
	I1216 02:56:26.078911 1848358 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 02:56:26.078923 1848358 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 02:56:26.078944 1848358 cache.go:243] Successfully downloaded all kic artifacts
	I1216 02:56:26.078984 1848358 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:56:26.079085 1848358 start.go:364] duration metric: took 83.453µs to acquireMachinesLock for "functional-389759"
	I1216 02:56:26.079107 1848358 start.go:96] Skipping create...Using existing machine configuration
	I1216 02:56:26.079112 1848358 fix.go:54] fixHost starting: 
	I1216 02:56:26.079431 1848358 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
	I1216 02:56:26.097178 1848358 fix.go:112] recreateIfNeeded on functional-389759: state=Running err=<nil>
	W1216 02:56:26.097205 1848358 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 02:56:26.100419 1848358 out.go:252] * Updating the running docker "functional-389759" container ...
	I1216 02:56:26.100450 1848358 machine.go:94] provisionDockerMachine start ...
	I1216 02:56:26.100545 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.118508 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.118832 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.118839 1848358 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:56:26.259148 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.259164 1848358 ubuntu.go:182] provisioning hostname "functional-389759"
	I1216 02:56:26.259234 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.277500 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.277820 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.277829 1848358 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
	I1216 02:56:26.421165 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
	
	I1216 02:56:26.421257 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.440349 1848358 main.go:143] libmachine: Using SSH client type: native
	I1216 02:56:26.440644 1848358 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1216 02:56:26.440657 1848358 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:56:26.579508 1848358 main.go:143] libmachine: SSH cmd err, output: <nil>: 
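The guard above only appends a 127.0.1.1 entry when no line in /etc/hosts already ends with the machine name, and rewrites an existing 127.0.1.1 line in place otherwise. The result can be checked from the host (hedged sketch; docker exec against the profile container from this log):

	# Confirm the hostname resolves locally inside the guest
	docker exec functional-389759 grep -n 'functional-389759' /etc/hosts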
	I1216 02:56:26.579533 1848358 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 02:56:26.579555 1848358 ubuntu.go:190] setting up certificates
	I1216 02:56:26.579573 1848358 provision.go:84] configureAuth start
	I1216 02:56:26.579642 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:26.598860 1848358 provision.go:143] copyHostCerts
	I1216 02:56:26.598936 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 02:56:26.598944 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 02:56:26.599024 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 02:56:26.599152 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 02:56:26.599157 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 02:56:26.599183 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 02:56:26.599298 1848358 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 02:56:26.599302 1848358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 02:56:26.599329 1848358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 02:56:26.599373 1848358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
	I1216 02:56:26.772331 1848358 provision.go:177] copyRemoteCerts
	I1216 02:56:26.772384 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:56:26.772421 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.790833 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:26.886672 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 02:56:26.903453 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 02:56:26.920711 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 02:56:26.938516 1848358 provision.go:87] duration metric: took 358.921052ms to configureAuth
	I1216 02:56:26.938533 1848358 ubuntu.go:206] setting minikube options for container-runtime
	I1216 02:56:26.938730 1848358 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 02:56:26.938735 1848358 machine.go:97] duration metric: took 838.281264ms to provisionDockerMachine
	I1216 02:56:26.938741 1848358 start.go:293] postStartSetup for "functional-389759" (driver="docker")
	I1216 02:56:26.938751 1848358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:56:26.938797 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:56:26.938840 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:26.957601 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.062997 1848358 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:56:27.066589 1848358 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 02:56:27.066608 1848358 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 02:56:27.066618 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 02:56:27.066672 1848358 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 02:56:27.066743 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 02:56:27.066818 1848358 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
	I1216 02:56:27.066859 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
	I1216 02:56:27.074143 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:27.091762 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
	I1216 02:56:27.109760 1848358 start.go:296] duration metric: took 171.004929ms for postStartSetup
	I1216 02:56:27.109845 1848358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:56:27.109892 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.130041 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.224282 1848358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 02:56:27.229295 1848358 fix.go:56] duration metric: took 1.150175721s for fixHost
	I1216 02:56:27.229312 1848358 start.go:83] releasing machines lock for "functional-389759", held for 1.150220136s
	I1216 02:56:27.229388 1848358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
	I1216 02:56:27.246922 1848358 ssh_runner.go:195] Run: cat /version.json
	I1216 02:56:27.246974 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.247232 1848358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:56:27.247302 1848358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
	I1216 02:56:27.269086 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.280897 1848358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
	I1216 02:56:27.370924 1848358 ssh_runner.go:195] Run: systemctl --version
	I1216 02:56:27.469438 1848358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:56:27.474082 1848358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:56:27.474143 1848358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:56:27.482716 1848358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 02:56:27.482730 1848358 start.go:496] detecting cgroup driver to use...
	I1216 02:56:27.482760 1848358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 02:56:27.482821 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 02:56:27.499295 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 02:56:27.512730 1848358 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:56:27.512788 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:56:27.529084 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:56:27.542618 1848358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:56:27.669326 1848358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:56:27.809661 1848358 docker.go:234] disabling docker service ...
	I1216 02:56:27.809726 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:56:27.825238 1848358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:56:27.839007 1848358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:56:27.961490 1848358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:56:28.085730 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:56:28.099793 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:56:28.115219 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 02:56:28.124904 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 02:56:28.134481 1848358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 02:56:28.134543 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 02:56:28.143714 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.152978 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 02:56:28.161801 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 02:56:28.170944 1848358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:56:28.179475 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 02:56:28.188723 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 02:56:28.197979 1848358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 02:56:28.206949 1848358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:56:28.214520 1848358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:56:28.222338 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.339529 1848358 ssh_runner.go:195] Run: sudo systemctl restart containerd
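The run of sed edits immediately above pins the sandbox (pause) image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, and normalizes the runtime to io.containerd.runc.v2 before containerd is restarted. Whether the rewritten config took effect can be confirmed after the restart (a hedged sketch):

	# Inspect the two settings minikube just rewrote in the guest's containerd config
	docker exec functional-389759 grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml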
	I1216 02:56:28.517809 1848358 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 02:56:28.517866 1848358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 02:56:28.522881 1848358 start.go:564] Will wait 60s for crictl version
	I1216 02:56:28.522937 1848358 ssh_runner.go:195] Run: which crictl
	I1216 02:56:28.526562 1848358 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 02:56:28.550167 1848358 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 02:56:28.550234 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.570328 1848358 ssh_runner.go:195] Run: containerd --version
	I1216 02:56:28.596807 1848358 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 02:56:28.599682 1848358 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 02:56:28.616323 1848358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 02:56:28.623466 1848358 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 02:56:28.626293 1848358 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:56:28.626428 1848358 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:56:28.626509 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.651243 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.651255 1848358 containerd.go:534] Images already preloaded, skipping extraction
	I1216 02:56:28.651317 1848358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:56:28.676192 1848358 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 02:56:28.676203 1848358 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:56:28.676209 1848358 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1216 02:56:28.676312 1848358 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:56:28.676373 1848358 ssh_runner.go:195] Run: sudo crictl info
	I1216 02:56:28.700239 1848358 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 02:56:28.700256 1848358 cni.go:84] Creating CNI manager for ""
	I1216 02:56:28.700264 1848358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:56:28.700272 1848358 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:56:28.700294 1848358 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:56:28.700400 1848358 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-389759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
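The generated file is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch, assuming gopkg.in/yaml.v3 is available, that decodes the documents generically and sanity-checks the cgroupDriver value the log reports:

// check_kubeadm_yaml.go - illustrative only; abbreviated stand-in content.
package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml shown above.
	kubeadmYAML := `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
`
	// Split on the YAML document separator and decode each doc generically.
	for _, doc := range strings.Split(kubeadmYAML, "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		if m["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", m["cgroupDriver"]) // expect: cgroupfs
		}
	}
}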
	
	I1216 02:56:28.700473 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 02:56:28.708593 1848358 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 02:56:28.708655 1848358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:56:28.716199 1848358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 02:56:28.728994 1848358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 02:56:28.742129 1848358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1216 02:56:28.754916 1848358 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 02:56:28.758765 1848358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:56:28.878289 1848358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:56:29.187922 1848358 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
	I1216 02:56:29.187939 1848358 certs.go:195] generating shared ca certs ...
	I1216 02:56:29.187954 1848358 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:56:29.188132 1848358 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 02:56:29.188175 1848358 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 02:56:29.188182 1848358 certs.go:257] generating profile certs ...
	I1216 02:56:29.188282 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
	I1216 02:56:29.188344 1848358 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
	I1216 02:56:29.188398 1848358 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
	I1216 02:56:29.188534 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 02:56:29.188573 1848358 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 02:56:29.188580 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 02:56:29.188615 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 02:56:29.188648 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:56:29.188671 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 02:56:29.188729 1848358 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 02:56:29.189416 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:56:29.212546 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 02:56:29.235562 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:56:29.257334 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:56:29.278410 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 02:56:29.297639 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:56:29.316055 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:56:29.333992 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:56:29.351802 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 02:56:29.370197 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:56:29.388624 1848358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 02:56:29.406325 1848358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 02:56:29.419477 1848358 ssh_runner.go:195] Run: openssl version
	I1216 02:56:29.425780 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.433488 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 02:56:29.440931 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444594 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.444652 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 02:56:29.485312 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 02:56:29.492681 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.499838 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 02:56:29.507532 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511555 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.511621 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 02:56:29.552382 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 02:56:29.559682 1848358 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.566808 1848358 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:56:29.574430 1848358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578016 1848358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.578077 1848358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:56:29.619735 1848358 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
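Each CA is installed by copying the PEM into /usr/share/ca-certificates and symlinking it into /etc/ssl/certs under its OpenSSL subject hash; the b5213941.0 link tested above is the hash name for minikubeCA.pem. A hedged Go sketch of that sequence (not minikube's code; requires root for the symlink):

// install_ca.go - illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	// `openssl x509 -hash -noout -in <cert>` prints the subject-name hash
	// that OpenSSL uses to look up CAs in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link (ln -fs semantics)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", pemPath, "->", link)
}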
	I1216 02:56:29.627282 1848358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:56:29.630975 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 02:56:29.674022 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 02:56:29.716546 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 02:56:29.760378 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 02:56:29.801675 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 02:56:29.842471 1848358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
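`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24h). The equivalent check in Go, using the same 24h window (file path taken from the log; illustrative only):

// checkend.go - illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same comparison as `-checkend 86400`: does NotAfter fall inside
	// the next 24 hours?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}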
	I1216 02:56:29.883311 1848358 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:56:29.883412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 02:56:29.883472 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.910518 1848358 cri.go:89] found id: ""
	I1216 02:56:29.910580 1848358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:56:29.918530 1848358 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 02:56:29.918539 1848358 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 02:56:29.918590 1848358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 02:56:29.926051 1848358 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:29.926594 1848358 kubeconfig.go:125] found "functional-389759" server: "https://192.168.49.2:8441"
	I1216 02:56:29.927850 1848358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 02:56:29.937055 1848358 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 02:41:54.425829655 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 02:56:28.747941655 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
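Drift is detected by diffing the live kubeadm.yaml against the freshly rendered .new file; diff's exit status 1 means "files differ", status 0 means identical, and anything higher is a real error. A minimal sketch of that check (paths from the log; not minikube's code):

// drift_check.go - illustrative only.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift: configs identical")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// Exit 1 from diff is not a failure; it carries the drift itself.
		fmt.Printf("drift detected, will reconfigure:\n%s", out)
	default:
		panic(err)
	}
}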
	I1216 02:56:29.937066 1848358 kubeadm.go:1161] stopping kube-system containers ...
	I1216 02:56:29.937078 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1216 02:56:29.937140 1848358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:56:29.975717 1848358 cri.go:89] found id: ""
	I1216 02:56:29.975778 1848358 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 02:56:29.994835 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:56:30.004346 1848358 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 02:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 02:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 02:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 02:46 /etc/kubernetes/scheduler.conf
	
	I1216 02:56:30.004430 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 02:56:30.041702 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 02:56:30.052507 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.052569 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:56:30.061943 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.073420 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.073488 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:56:30.083069 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 02:56:30.092935 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:56:30.092994 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
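Each kubeconfig is grepped for the expected control-plane URL; a status-1 grep means the check could not confirm the endpoint, so the file is removed and regenerated by the `kubeadm init phase kubeconfig` run that follows. A sketch of the same logic (the helper name ensureEndpoint is illustrative):

// kubeconfig_endpoint.go - illustrative only.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// ensureEndpoint removes a kubeconfig that does not reference the expected
// control-plane URL, so kubeadm will regenerate it.
func ensureEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // already points at the right endpoint
	}
	fmt.Printf("%q not found in %s - removing\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(f, "https://control-plane.minikube.internal:8441"); err != nil {
			fmt.Println(err)
		}
	}
}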
	I1216 02:56:30.101587 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:56:30.114178 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:30.166214 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.346212 1848358 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179973709s)
	I1216 02:56:31.346269 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.548322 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.601050 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 02:56:31.649581 1848358 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:56:31.649669 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" probe repeated every ~500ms from 02:56:32.150228 through 02:57:30.650333, with no kube-apiserver process found ...]
	I1216 02:57:31.149886 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
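The wait above is a plain poll: pgrep for the apiserver process roughly every 500ms until it appears or the wait gives up. A minimal sketch of that loop (the 60s timeout is an assumption, not taken from the log):

// wait_apiserver.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // timeout is an assumption
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}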
	I1216 02:57:31.650315 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:31.650394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:31.674930 1848358 cri.go:89] found id: ""
	I1216 02:57:31.674944 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.674951 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:31.674956 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:31.675016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:31.714000 1848358 cri.go:89] found id: ""
	I1216 02:57:31.714013 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.714021 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:31.714026 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:31.714086 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:31.747840 1848358 cri.go:89] found id: ""
	I1216 02:57:31.747854 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.747861 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:31.747866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:31.747926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:31.773860 1848358 cri.go:89] found id: ""
	I1216 02:57:31.773874 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.773886 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:31.773891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:31.773953 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:31.802242 1848358 cri.go:89] found id: ""
	I1216 02:57:31.802256 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.802263 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:31.802268 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:31.802327 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:31.827140 1848358 cri.go:89] found id: ""
	I1216 02:57:31.827170 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.827177 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:31.827183 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:31.827250 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:31.851813 1848358 cri.go:89] found id: ""
	I1216 02:57:31.851827 1848358 logs.go:282] 0 containers: []
	W1216 02:57:31.851834 1848358 logs.go:284] No container was found matching "kindnet"
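Each "listing CRI containers" probe is `crictl ps -a --quiet --name=<component>`, which prints one container ID per line and nothing when no container matches; every probe in this run comes back empty. A sketch of the same probe (the helper name containerIDs is illustrative):

// cri_probe.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs printed by crictl for containers whose name
// matches the given component, or an empty slice when none match.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}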
	I1216 02:57:31.851841 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:31.851852 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:31.907296 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:31.907315 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:31.924742 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:31.924759 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:31.990670 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:31.980837   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.981269   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.984774   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.985315   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.986770   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:31.980837   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.981269   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.984774   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.985315   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:31.986770   10765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:31.990681 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:31.990692 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:32.056720 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:32.056741 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:34.586741 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:34.596594 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:34.596656 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:34.624415 1848358 cri.go:89] found id: ""
	I1216 02:57:34.624430 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.624437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:34.624454 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:34.624529 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:34.648856 1848358 cri.go:89] found id: ""
	I1216 02:57:34.648877 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.648884 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:34.648889 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:34.648952 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:34.674838 1848358 cri.go:89] found id: ""
	I1216 02:57:34.674852 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.674859 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:34.674864 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:34.674938 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:34.720068 1848358 cri.go:89] found id: ""
	I1216 02:57:34.720082 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.720089 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:34.720093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:34.720152 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:34.749510 1848358 cri.go:89] found id: ""
	I1216 02:57:34.749525 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.749531 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:34.749541 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:34.749603 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:34.776711 1848358 cri.go:89] found id: ""
	I1216 02:57:34.776725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.776732 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:34.776737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:34.776797 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:34.801539 1848358 cri.go:89] found id: ""
	I1216 02:57:34.801552 1848358 logs.go:282] 0 containers: []
	W1216 02:57:34.801560 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:34.801568 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:34.801578 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:34.857992 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:34.858012 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:34.876290 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:34.876307 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:34.948190 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:34.939256   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.940046   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.941775   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.942456   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.944096   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:34.939256   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.940046   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.941775   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.942456   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:34.944096   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:34.948202 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:34.948213 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:35.015139 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:35.015162 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:37.549752 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:37.560125 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:37.560194 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:37.585130 1848358 cri.go:89] found id: ""
	I1216 02:57:37.585144 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.585151 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:37.585156 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:37.585216 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:37.610009 1848358 cri.go:89] found id: ""
	I1216 02:57:37.610023 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.610030 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:37.610035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:37.610096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:37.635414 1848358 cri.go:89] found id: ""
	I1216 02:57:37.635429 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.635436 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:37.635441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:37.635503 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:37.660026 1848358 cri.go:89] found id: ""
	I1216 02:57:37.660046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.660053 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:37.660059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:37.660119 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:37.702568 1848358 cri.go:89] found id: ""
	I1216 02:57:37.702583 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.702590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:37.702595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:37.702659 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:37.735671 1848358 cri.go:89] found id: ""
	I1216 02:57:37.735685 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.735693 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:37.735698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:37.735766 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:37.764451 1848358 cri.go:89] found id: ""
	I1216 02:57:37.764465 1848358 logs.go:282] 0 containers: []
	W1216 02:57:37.764472 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:37.764481 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:37.764492 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:37.781790 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:37.781808 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:37.850130 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:37.841387   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.841981   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.843649   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845020   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845734   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:57:37.841387   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.841981   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.843649   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845020   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:37.845734   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:57:37.850150 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:37.850161 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:37.912286 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:37.912306 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:37.947545 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:37.947561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:40.504032 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:40.514627 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:40.514689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:40.543498 1848358 cri.go:89] found id: ""
	I1216 02:57:40.543513 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.543520 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:40.543524 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:40.543593 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:40.568106 1848358 cri.go:89] found id: ""
	I1216 02:57:40.568120 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.568127 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:40.568132 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:40.568190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:40.592290 1848358 cri.go:89] found id: ""
	I1216 02:57:40.592304 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.592317 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:40.592322 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:40.592382 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:40.617796 1848358 cri.go:89] found id: ""
	I1216 02:57:40.617811 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.617818 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:40.617823 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:40.617882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:40.643710 1848358 cri.go:89] found id: ""
	I1216 02:57:40.643725 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.643732 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:40.643737 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:40.643811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:40.672711 1848358 cri.go:89] found id: ""
	I1216 02:57:40.672731 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.672738 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:40.672743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:40.672802 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:40.704590 1848358 cri.go:89] found id: ""
	I1216 02:57:40.704604 1848358 logs.go:282] 0 containers: []
	W1216 02:57:40.704611 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:40.704620 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:40.704630 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:40.769622 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:40.769642 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:40.786992 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:40.787010 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:40.853579 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:40.844241   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.845343   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847164   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.847823   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:40.849606   11089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:40.853590 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:40.853600 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:40.915814 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:40.915833 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:43.448229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:43.458340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:43.458399 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:43.481954 1848358 cri.go:89] found id: ""
	I1216 02:57:43.481967 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.481974 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:43.481979 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:43.482037 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:43.507588 1848358 cri.go:89] found id: ""
	I1216 02:57:43.507603 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.507610 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:43.507614 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:43.507684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:43.533164 1848358 cri.go:89] found id: ""
	I1216 02:57:43.533179 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.533188 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:43.533193 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:43.533255 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:43.558139 1848358 cri.go:89] found id: ""
	I1216 02:57:43.558152 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.558159 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:43.558164 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:43.558221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:43.587218 1848358 cri.go:89] found id: ""
	I1216 02:57:43.587244 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.587251 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:43.587256 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:43.587315 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:43.613584 1848358 cri.go:89] found id: ""
	I1216 02:57:43.613598 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.613605 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:43.613610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:43.613691 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:43.645887 1848358 cri.go:89] found id: ""
	I1216 02:57:43.645901 1848358 logs.go:282] 0 containers: []
	W1216 02:57:43.645908 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:43.645916 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:43.645928 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:43.662557 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:43.662574 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:43.745017 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:43.735622   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.736621   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738304   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.738872   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:43.740427   11188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:43.745029 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:43.745040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:43.808792 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:43.808811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:43.837682 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:43.837698 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:46.396229 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:46.406230 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:46.406302 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:46.429707 1848358 cri.go:89] found id: ""
	I1216 02:57:46.429721 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.429728 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:46.429733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:46.429796 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:46.454076 1848358 cri.go:89] found id: ""
	I1216 02:57:46.454090 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.454097 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:46.454101 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:46.454159 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:46.479472 1848358 cri.go:89] found id: ""
	I1216 02:57:46.479486 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.479493 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:46.479498 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:46.479557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:46.505579 1848358 cri.go:89] found id: ""
	I1216 02:57:46.505592 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.505599 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:46.505605 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:46.505665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:46.530373 1848358 cri.go:89] found id: ""
	I1216 02:57:46.530387 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.530394 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:46.530399 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:46.530464 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:46.554723 1848358 cri.go:89] found id: ""
	I1216 02:57:46.554736 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.554743 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:46.554748 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:46.554808 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:46.579147 1848358 cri.go:89] found id: ""
	I1216 02:57:46.579164 1848358 logs.go:282] 0 containers: []
	W1216 02:57:46.579171 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:46.579179 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:46.579189 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:46.634449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:46.634473 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:46.651968 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:46.651988 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:46.739219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:46.722068   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.723633   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.732527   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.733244   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:46.734892   11296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:46.739239 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:46.739250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:46.812956 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:46.812976 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:49.345440 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:49.356029 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:49.356092 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:49.381514 1848358 cri.go:89] found id: ""
	I1216 02:57:49.381528 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.381535 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:49.381540 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:49.381608 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:49.411765 1848358 cri.go:89] found id: ""
	I1216 02:57:49.411779 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.411786 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:49.411791 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:49.411854 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:49.440610 1848358 cri.go:89] found id: ""
	I1216 02:57:49.440624 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.440631 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:49.440637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:49.440705 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:49.470688 1848358 cri.go:89] found id: ""
	I1216 02:57:49.470702 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.470709 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:49.470714 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:49.470774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:49.497170 1848358 cri.go:89] found id: ""
	I1216 02:57:49.497184 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.497191 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:49.497196 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:49.497254 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:49.521925 1848358 cri.go:89] found id: ""
	I1216 02:57:49.521940 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.521947 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:49.521952 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:49.522011 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:49.546344 1848358 cri.go:89] found id: ""
	I1216 02:57:49.546358 1848358 logs.go:282] 0 containers: []
	W1216 02:57:49.546366 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:49.546374 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:49.546385 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:49.602407 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:49.602426 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:49.619246 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:49.619263 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:49.683476 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:49.674979   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.675736   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677328   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.677797   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:49.679458   11402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:49.683488 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:49.683499 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:49.752732 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:49.752753 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:52.289101 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:52.300210 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:52.300272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:52.327757 1848358 cri.go:89] found id: ""
	I1216 02:57:52.327772 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.327779 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:52.327784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:52.327842 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:52.352750 1848358 cri.go:89] found id: ""
	I1216 02:57:52.352764 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.352771 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:52.352776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:52.352834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:52.377100 1848358 cri.go:89] found id: ""
	I1216 02:57:52.377114 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.377135 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:52.377140 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:52.377210 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:52.401376 1848358 cri.go:89] found id: ""
	I1216 02:57:52.401390 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.401397 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:52.401402 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:52.401462 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:52.428592 1848358 cri.go:89] found id: ""
	I1216 02:57:52.428606 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.428613 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:52.428618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:52.428677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:52.457192 1848358 cri.go:89] found id: ""
	I1216 02:57:52.457206 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.457213 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:52.457218 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:52.457276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:52.481473 1848358 cri.go:89] found id: ""
	I1216 02:57:52.481494 1848358 logs.go:282] 0 containers: []
	W1216 02:57:52.481501 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:52.481509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:52.481519 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:52.540087 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:52.540106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:52.560374 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:52.560391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:52.628219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:52.619222   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.619906   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.621689   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.622192   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:52.623773   11506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:52.628231 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:52.628241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:52.692110 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:52.692130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:55.226607 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:55.236818 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:55.236879 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:55.265073 1848358 cri.go:89] found id: ""
	I1216 02:57:55.265087 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.265094 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:55.265099 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:55.265160 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:55.291262 1848358 cri.go:89] found id: ""
	I1216 02:57:55.291276 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.291284 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:55.291289 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:55.291357 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:55.320515 1848358 cri.go:89] found id: ""
	I1216 02:57:55.320539 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.320546 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:55.320551 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:55.320620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:55.348402 1848358 cri.go:89] found id: ""
	I1216 02:57:55.348426 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.348433 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:55.348438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:55.348500 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:55.373391 1848358 cri.go:89] found id: ""
	I1216 02:57:55.373405 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.373413 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:55.373418 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:55.373480 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:55.402098 1848358 cri.go:89] found id: ""
	I1216 02:57:55.402111 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.402118 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:55.402124 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:55.402183 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:55.427824 1848358 cri.go:89] found id: ""
	I1216 02:57:55.427838 1848358 logs.go:282] 0 containers: []
	W1216 02:57:55.427845 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:55.427853 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:55.427863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:55.497187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:55.497216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:55.526960 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:55.526981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:55.585085 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:55.585105 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:57:55.602223 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:55.602241 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:55.671427 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:55.662836   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.663796   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665445   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.665748   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:55.667112   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:58.171689 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:57:58.181822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:57:58.181885 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:57:58.206129 1848358 cri.go:89] found id: ""
	I1216 02:57:58.206143 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.206150 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:57:58.206155 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:57:58.206214 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:57:58.230940 1848358 cri.go:89] found id: ""
	I1216 02:57:58.230954 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.230960 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:57:58.230966 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:57:58.231024 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:57:58.256698 1848358 cri.go:89] found id: ""
	I1216 02:57:58.256712 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.256720 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:57:58.256724 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:57:58.256788 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:57:58.281370 1848358 cri.go:89] found id: ""
	I1216 02:57:58.281385 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.281392 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:57:58.281396 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:57:58.281456 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:57:58.313032 1848358 cri.go:89] found id: ""
	I1216 02:57:58.313046 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.313054 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:57:58.313059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:57:58.313124 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:57:58.337968 1848358 cri.go:89] found id: ""
	I1216 02:57:58.337982 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.337989 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:57:58.337994 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:57:58.338052 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:57:58.367215 1848358 cri.go:89] found id: ""
	I1216 02:57:58.367231 1848358 logs.go:282] 0 containers: []
	W1216 02:57:58.367239 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:57:58.367247 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:57:58.367259 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:57:58.433078 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:57:58.423612   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.424320   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426139   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.426759   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:57:58.428489   11713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:57:58.433088 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:57:58.433099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:57:58.496751 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:57:58.496771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:57:58.528345 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:57:58.528362 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:57:58.585231 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:57:58.585249 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:01.103256 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:01.114505 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:01.114572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:01.141817 1848358 cri.go:89] found id: ""
	I1216 02:58:01.141831 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.141838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:01.141843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:01.141908 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:01.170638 1848358 cri.go:89] found id: ""
	I1216 02:58:01.170653 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.170660 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:01.170667 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:01.170733 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:01.197958 1848358 cri.go:89] found id: ""
	I1216 02:58:01.197973 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.197980 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:01.197986 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:01.198051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:01.225715 1848358 cri.go:89] found id: ""
	I1216 02:58:01.225731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.225738 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:01.225744 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:01.225803 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:01.256157 1848358 cri.go:89] found id: ""
	I1216 02:58:01.256171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.256178 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:01.256184 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:01.256244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:01.281610 1848358 cri.go:89] found id: ""
	I1216 02:58:01.281625 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.281633 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:01.281638 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:01.281702 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:01.306348 1848358 cri.go:89] found id: ""
	I1216 02:58:01.306363 1848358 logs.go:282] 0 containers: []
	W1216 02:58:01.306370 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:01.306377 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:01.306388 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:01.335207 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:01.335224 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:01.392222 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:01.392242 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:01.408874 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:01.408890 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:01.472601 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:01.464071   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.464670   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.466724   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.467464   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:01.468593   11837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:01.472613 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:01.472626 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:04.035738 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:04.046578 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:04.046661 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:04.072441 1848358 cri.go:89] found id: ""
	I1216 02:58:04.072456 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.072463 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:04.072468 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:04.072531 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:04.103113 1848358 cri.go:89] found id: ""
	I1216 02:58:04.103128 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.103135 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:04.103139 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:04.103208 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:04.127981 1848358 cri.go:89] found id: ""
	I1216 02:58:04.127995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.128002 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:04.128007 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:04.128067 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:04.153050 1848358 cri.go:89] found id: ""
	I1216 02:58:04.153065 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.153072 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:04.153077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:04.153139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:04.176840 1848358 cri.go:89] found id: ""
	I1216 02:58:04.176854 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.176879 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:04.176885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:04.176954 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:04.205747 1848358 cri.go:89] found id: ""
	I1216 02:58:04.205771 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.205779 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:04.205784 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:04.205853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:04.234453 1848358 cri.go:89] found id: ""
	I1216 02:58:04.234467 1848358 logs.go:282] 0 containers: []
	W1216 02:58:04.234474 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:04.234483 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:04.234505 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:04.294713 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:04.294732 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:04.312011 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:04.312029 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:04.378295 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:04.369406   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.370236   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372024   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.372637   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:04.374434   11930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:04.378314 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:04.378325 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:04.440962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:04.440984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
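The cycle above is the driver's control-plane sweep: it looks for a host kube-apiserver process with pgrep, then asks the CRI for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and gets an empty ID list every time. A minimal way to re-run the same sweep by hand, assuming shell access to the node (e.g. via minikube ssh into this profile); empty output for a name means no container, running or exited, was ever created:

    # Same per-component query the log shows, one name at a time.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      printf '%-24s' "$c"
      sudo crictl ps -a --quiet --name="$c" | wc -l   # 0 = no container
    done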
	I1216 02:58:06.970088 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:06.983751 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:06.983819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:07.013657 1848358 cri.go:89] found id: ""
	I1216 02:58:07.013672 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.013679 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:07.013684 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:07.013752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:07.038882 1848358 cri.go:89] found id: ""
	I1216 02:58:07.038896 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.038904 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:07.038909 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:07.038968 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:07.064215 1848358 cri.go:89] found id: ""
	I1216 02:58:07.064230 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.064237 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:07.064242 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:07.064304 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:07.088144 1848358 cri.go:89] found id: ""
	I1216 02:58:07.088158 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.088165 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:07.088170 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:07.088229 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:07.112044 1848358 cri.go:89] found id: ""
	I1216 02:58:07.112059 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.112066 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:07.112071 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:07.112137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:07.138570 1848358 cri.go:89] found id: ""
	I1216 02:58:07.138586 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.138593 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:07.138599 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:07.138658 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:07.166931 1848358 cri.go:89] found id: ""
	I1216 02:58:07.166945 1848358 logs.go:282] 0 containers: []
	W1216 02:58:07.166952 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:07.166959 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:07.166973 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:07.197292 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:07.197308 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:07.255003 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:07.255023 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:07.273531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:07.273547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:07.338842 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:07.330204   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.330976   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.332722   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.333328   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:07.334951   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:07.338852 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:07.338863 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
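Each "describe nodes" attempt in these cycles dies the same way: the dial to [::1]:8441 is refused, meaning nothing is listening on the apiserver port at all (kubectl never even reaches TLS or auth). Two hedged checks, using standard tools rather than anything from the test driver, that confirm the port is simply closed:

    # No listener => kube-apiserver never bound :8441 on this node.
    sudo ss -ltnp | grep -w 8441 || echo 'no listener on 8441'
    # A refused probe reproduces the "connection refused" in the log.
    curl -sk --max-time 2 https://localhost:8441/healthz || echo 'apiserver unreachable'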
	I1216 02:58:09.902725 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:09.913150 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:09.913213 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:09.946614 1848358 cri.go:89] found id: ""
	I1216 02:58:09.946627 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.946634 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:09.946639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:09.946703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:09.975470 1848358 cri.go:89] found id: ""
	I1216 02:58:09.975484 1848358 logs.go:282] 0 containers: []
	W1216 02:58:09.975491 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:09.975496 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:09.975557 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:10.002745 1848358 cri.go:89] found id: ""
	I1216 02:58:10.002773 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.002782 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:10.002787 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:10.002866 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:10.035489 1848358 cri.go:89] found id: ""
	I1216 02:58:10.035504 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.035512 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:10.035517 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:10.035581 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:10.062019 1848358 cri.go:89] found id: ""
	I1216 02:58:10.062044 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.062052 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:10.062059 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:10.062139 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:10.088952 1848358 cri.go:89] found id: ""
	I1216 02:58:10.088977 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.088986 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:10.088991 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:10.089061 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:10.115714 1848358 cri.go:89] found id: ""
	I1216 02:58:10.115736 1848358 logs.go:282] 0 containers: []
	W1216 02:58:10.115744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:10.115752 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:10.115762 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:10.172504 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:10.172524 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:10.190804 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:10.190821 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:10.258662 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:10.249875   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.250514   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252114   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.252638   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:10.254176   12143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:10.258675 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:10.258686 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:10.321543 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:10.321562 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:12.849334 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:12.859284 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:12.859345 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:12.884624 1848358 cri.go:89] found id: ""
	I1216 02:58:12.884640 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.884648 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:12.884653 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:12.884722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:12.908735 1848358 cri.go:89] found id: ""
	I1216 02:58:12.908749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.908756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:12.908761 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:12.908819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:12.944827 1848358 cri.go:89] found id: ""
	I1216 02:58:12.944841 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.944848 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:12.944854 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:12.944917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:12.974281 1848358 cri.go:89] found id: ""
	I1216 02:58:12.974295 1848358 logs.go:282] 0 containers: []
	W1216 02:58:12.974302 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:12.974308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:12.974367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:13.008278 1848358 cri.go:89] found id: ""
	I1216 02:58:13.008294 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.008302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:13.008307 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:13.008376 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:13.034272 1848358 cri.go:89] found id: ""
	I1216 02:58:13.034286 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.034294 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:13.034299 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:13.034361 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:13.064663 1848358 cri.go:89] found id: ""
	I1216 02:58:13.064688 1848358 logs.go:282] 0 containers: []
	W1216 02:58:13.064695 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:13.064703 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:13.064716 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:13.127826 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:13.127848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:13.158482 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:13.158498 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:13.218053 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:13.218072 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:13.234830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:13.234846 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:13.298317 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:13.289893   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.290910   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.291765   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293309   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:13.293580   12263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:15.798590 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:15.809144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:15.809225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:15.834683 1848358 cri.go:89] found id: ""
	I1216 02:58:15.834696 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.834704 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:15.834709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:15.834774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:15.860001 1848358 cri.go:89] found id: ""
	I1216 02:58:15.860030 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.860038 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:15.860042 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:15.860113 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:15.884488 1848358 cri.go:89] found id: ""
	I1216 02:58:15.884503 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.884510 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:15.884515 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:15.884572 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:15.908030 1848358 cri.go:89] found id: ""
	I1216 02:58:15.908045 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.908051 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:15.908056 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:15.908116 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:15.932641 1848358 cri.go:89] found id: ""
	I1216 02:58:15.932654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.932661 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:15.932666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:15.932723 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:15.962741 1848358 cri.go:89] found id: ""
	I1216 02:58:15.962754 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.962772 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:15.962779 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:15.962836 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:15.990774 1848358 cri.go:89] found id: ""
	I1216 02:58:15.990788 1848358 logs.go:282] 0 containers: []
	W1216 02:58:15.990806 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:15.990829 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:15.990838 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:16.067729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:16.067748 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:16.098615 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:16.098635 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:16.154944 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:16.154963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:16.172510 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:16.172527 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:16.237380 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:16.229269   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.229868   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231379   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.231950   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:16.233594   12366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
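The timestamps give the cadence of the wait loop: roughly every three seconds the driver re-runs pgrep -xnf, repeats the crictl sweep, and re-gathers the same four log sources (kubelet, dmesg, describe nodes, containerd, container status). A rough bash equivalent of the observable behavior (a sketch only, not minikube's actual implementation):

    # Poll until a process whose full cmdline matches the pattern appears;
    # -x exact match, -n newest only, -f match against the full command line.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sudo crictl ps -a --quiet --name=kube-apiserver   # still empty here
      sleep 3
    done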
	I1216 02:58:18.738100 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:18.751636 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:18.751717 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:18.779608 1848358 cri.go:89] found id: ""
	I1216 02:58:18.779622 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.779629 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:18.779634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:18.779693 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:18.805721 1848358 cri.go:89] found id: ""
	I1216 02:58:18.805735 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.805742 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:18.805747 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:18.805812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:18.831187 1848358 cri.go:89] found id: ""
	I1216 02:58:18.831203 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.831210 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:18.831215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:18.831280 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:18.857343 1848358 cri.go:89] found id: ""
	I1216 02:58:18.857367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.857375 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:18.857380 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:18.857448 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:18.882737 1848358 cri.go:89] found id: ""
	I1216 02:58:18.882751 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.882758 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:18.882765 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:18.882834 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:18.907486 1848358 cri.go:89] found id: ""
	I1216 02:58:18.907500 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.907508 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:18.907513 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:18.907573 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:18.939361 1848358 cri.go:89] found id: ""
	I1216 02:58:18.939375 1848358 logs.go:282] 0 containers: []
	W1216 02:58:18.939382 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:18.939390 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:18.939401 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:19.019241 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:19.010485   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.010907   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.012525   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.013150   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:19.014918   12452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:19.019251 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:19.019262 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:19.081820 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:19.081842 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:19.110025 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:19.110042 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:19.166216 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:19.166236 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
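One useful signal in all of this: the containerd journal is retrieved successfully every cycle, so the runtime itself is up; what never appears is any container in the k8s.io namespace. That points at kubelet failing to create pod sandboxes rather than at containerd. Two hedged checks that separate those cases, assuming the default containerd socket and that ctr is installed on the node:

    # Runtime reachable but namespace empty => kubelet-side bootstrap failure.
    sudo ctr -n k8s.io containers list       # expect header only, no rows
    sudo crictl pods                         # expect no pod sandboxes at all
    systemctl is-active containerd kubelet   # kubelet may be crash-looping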
	I1216 02:58:21.684597 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:21.694910 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:21.694974 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:21.719581 1848358 cri.go:89] found id: ""
	I1216 02:58:21.719595 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.719602 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:21.719607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:21.719670 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:21.745661 1848358 cri.go:89] found id: ""
	I1216 02:58:21.745675 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.745682 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:21.745688 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:21.745745 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:21.770329 1848358 cri.go:89] found id: ""
	I1216 02:58:21.770342 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.770349 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:21.770354 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:21.770425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:21.795402 1848358 cri.go:89] found id: ""
	I1216 02:58:21.795416 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.795423 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:21.795434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:21.795492 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:21.821959 1848358 cri.go:89] found id: ""
	I1216 02:58:21.821972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.821979 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:21.821984 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:21.822043 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:21.845121 1848358 cri.go:89] found id: ""
	I1216 02:58:21.845135 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.845142 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:21.845148 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:21.845209 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:21.868958 1848358 cri.go:89] found id: ""
	I1216 02:58:21.868972 1848358 logs.go:282] 0 containers: []
	W1216 02:58:21.868979 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:21.868987 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:21.868997 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:21.932460 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:21.924049   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.924916   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926515   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.926825   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:21.928346   12553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:21.932490 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:21.932502 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:22.006384 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:22.006415 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:22.040639 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:22.040655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:22.097981 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:22.098000 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:24.615636 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:24.626423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:24.626486 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:24.650890 1848358 cri.go:89] found id: ""
	I1216 02:58:24.650904 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.650911 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:24.650916 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:24.650984 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:24.676132 1848358 cri.go:89] found id: ""
	I1216 02:58:24.676146 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.676153 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:24.676158 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:24.676219 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:24.705732 1848358 cri.go:89] found id: ""
	I1216 02:58:24.705746 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.705753 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:24.705758 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:24.705820 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:24.729899 1848358 cri.go:89] found id: ""
	I1216 02:58:24.729914 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.729922 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:24.729927 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:24.729988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:24.760724 1848358 cri.go:89] found id: ""
	I1216 02:58:24.760744 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.760752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:24.760756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:24.760821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:24.789128 1848358 cri.go:89] found id: ""
	I1216 02:58:24.789144 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.789151 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:24.789157 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:24.789221 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:24.814525 1848358 cri.go:89] found id: ""
	I1216 02:58:24.814539 1848358 logs.go:282] 0 containers: []
	W1216 02:58:24.814548 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:24.814555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:24.814567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:24.845234 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:24.845251 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:24.904816 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:24.904835 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:24.922721 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:24.922744 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:25.017286 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:25.006539   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.007431   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.009403   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.010041   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:25.013452   12677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:58:25.017298 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:25.017309 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.580148 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:27.590499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:27.590563 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:27.614749 1848358 cri.go:89] found id: ""
	I1216 02:58:27.614764 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.614771 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:27.614776 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:27.614835 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:27.638735 1848358 cri.go:89] found id: ""
	I1216 02:58:27.638749 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.638756 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:27.638762 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:27.638821 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:27.665480 1848358 cri.go:89] found id: ""
	I1216 02:58:27.665495 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.665503 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:27.665508 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:27.665565 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:27.695981 1848358 cri.go:89] found id: ""
	I1216 02:58:27.695996 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.696004 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:27.696009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:27.696088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:27.720368 1848358 cri.go:89] found id: ""
	I1216 02:58:27.720390 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.720397 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:27.720403 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:27.720469 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:27.746357 1848358 cri.go:89] found id: ""
	I1216 02:58:27.746371 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.746377 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:27.746383 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:27.746441 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:27.770684 1848358 cri.go:89] found id: ""
	I1216 02:58:27.770708 1848358 logs.go:282] 0 containers: []
	W1216 02:58:27.770716 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:27.770724 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:27.770734 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:27.836245 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:27.836265 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:27.865946 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:27.865964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:27.924653 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:27.924675 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:27.945999 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:27.946015 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:28.027275 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:28.018924   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.019507   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021144   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021655   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.023260   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:28.018924   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.019507   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021144   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.021655   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:28.023260   12788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
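Every retry in this block fails identically: the in-node kubectl cannot reach the apiserver on localhost:8441 (the port set by --apiserver-port=8441). A quick way to confirm the refusal independently of kubectl is to probe the port from inside the node; this is a minimal sketch, assuming ssh access to the functional-389759 profile and that curl and ss are present in the node image (neither is shown in this log):

    # Is anything listening on the apiserver port at all?
    minikube -p functional-389759 ssh -- sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Probe the apiserver health endpoint directly; -k skips certificate verification.
    minikube -p functional-389759 ssh -- curl -sk https://localhost:8441/livez \
        || echo "apiserver not reachable"

An empty listing here would match the pgrep result below: no kube-apiserver process ever comes up, so every TCP connect to 8441 is refused.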
	I1216 02:58:30.527490 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:30.537746 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:30.537811 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:30.562783 1848358 cri.go:89] found id: ""
	I1216 02:58:30.562797 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.562805 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:30.562810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:30.562882 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:30.587495 1848358 cri.go:89] found id: ""
	I1216 02:58:30.587509 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.587515 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:30.587521 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:30.587583 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:30.611375 1848358 cri.go:89] found id: ""
	I1216 02:58:30.611392 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.611400 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:30.611406 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:30.611472 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:30.635442 1848358 cri.go:89] found id: ""
	I1216 02:58:30.635457 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.635464 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:30.635469 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:30.635527 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:30.659725 1848358 cri.go:89] found id: ""
	I1216 02:58:30.659745 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.659752 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:30.659757 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:30.659819 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:30.683639 1848358 cri.go:89] found id: ""
	I1216 02:58:30.683654 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.683661 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:30.683666 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:30.683725 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:30.709231 1848358 cri.go:89] found id: ""
	I1216 02:58:30.709246 1848358 logs.go:282] 0 containers: []
	W1216 02:58:30.709252 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:30.709260 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:30.709271 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:30.765116 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:30.765136 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:30.782213 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:30.782230 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:30.843173 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:30.835436   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.836096   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837188   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837822   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.839482   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:30.835436   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.836096   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837188   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.837822   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:30.839482   12874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
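The probes above run once per control-plane component. Condensed, the same check is a single loop over the component names; a sketch assuming it runs inside the node (for example via minikube ssh), where crictl talks to the same containerd runtime root shown in the log:

    # List containers (running or exited) for each expected component, as the log does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container matching $c"
    done

In this run every component comes back empty, which is why the gatherer falls through to host-level logs on each cycle.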
	I1216 02:58:30.843184 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:30.843195 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:30.905457 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:30.905477 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.448949 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:33.458942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:33.459006 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:33.494559 1848358 cri.go:89] found id: ""
	I1216 02:58:33.494573 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.494582 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:33.494602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:33.494672 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:33.521008 1848358 cri.go:89] found id: ""
	I1216 02:58:33.521028 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.521036 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:33.521041 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:33.521103 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:33.545598 1848358 cri.go:89] found id: ""
	I1216 02:58:33.545613 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.545620 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:33.545625 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:33.545684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:33.573194 1848358 cri.go:89] found id: ""
	I1216 02:58:33.573207 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.573214 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:33.573219 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:33.573284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:33.597747 1848358 cri.go:89] found id: ""
	I1216 02:58:33.597761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.597784 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:33.597789 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:33.597859 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:33.621788 1848358 cri.go:89] found id: ""
	I1216 02:58:33.621803 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.621810 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:33.621815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:33.621892 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:33.646528 1848358 cri.go:89] found id: ""
	I1216 02:58:33.646543 1848358 logs.go:282] 0 containers: []
	W1216 02:58:33.646550 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:33.646557 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:33.646567 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:33.708165 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:33.708187 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:33.736001 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:33.736018 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:33.791763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:33.791786 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:33.808896 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:33.808912 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:33.876753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:33.868694   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.869434   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871119   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871558   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.873040   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:33.868694   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.869434   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871119   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.871558   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:33.873040   12991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
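The describe-nodes probe is the heaviest of the gather steps and fails before producing any output. A cheaper liveness check can reuse the same in-node kubectl binary and kubeconfig; /readyz is a standard apiserver endpoint, but its use here is an assumption, not something this log exercises:

    # Same binary and kubeconfig the log uses, readiness endpoint only.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw '/readyz?verbose' \
        || echo "apiserver still refusing connections"

While the connection is refused this fails exactly like describe nodes, but without the discovery retries seen above.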
	I1216 02:58:36.376982 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:36.386962 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:36.387033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:36.410927 1848358 cri.go:89] found id: ""
	I1216 02:58:36.410941 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.410948 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:36.410954 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:36.411013 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:36.436158 1848358 cri.go:89] found id: ""
	I1216 02:58:36.436171 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.436179 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:36.436189 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:36.436260 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:36.460716 1848358 cri.go:89] found id: ""
	I1216 02:58:36.460730 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.460737 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:36.460743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:36.460815 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:36.485244 1848358 cri.go:89] found id: ""
	I1216 02:58:36.485258 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.485266 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:36.485272 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:36.485335 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:36.509347 1848358 cri.go:89] found id: ""
	I1216 02:58:36.509361 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.509368 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:36.509374 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:36.509434 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:36.534352 1848358 cri.go:89] found id: ""
	I1216 02:58:36.534367 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.534374 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:36.534419 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:36.534481 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:36.560075 1848358 cri.go:89] found id: ""
	I1216 02:58:36.560090 1848358 logs.go:282] 0 containers: []
	W1216 02:58:36.560097 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:36.560105 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:36.560116 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:36.618652 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:36.618670 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:36.635627 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:36.635643 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:36.704527 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:36.695687   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.696277   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698043   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698600   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.700190   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:36.695687   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.696277   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698043   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.698600   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:36.700190   13085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:36.704537 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:36.704550 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:36.767179 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:36.767199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:39.295686 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:39.305848 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:39.305909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:39.329771 1848358 cri.go:89] found id: ""
	I1216 02:58:39.329785 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.329792 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:39.329797 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:39.329857 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:39.354814 1848358 cri.go:89] found id: ""
	I1216 02:58:39.354829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.354836 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:39.354841 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:39.354900 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:39.380095 1848358 cri.go:89] found id: ""
	I1216 02:58:39.380110 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.380117 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:39.380122 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:39.380182 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:39.404438 1848358 cri.go:89] found id: ""
	I1216 02:58:39.404453 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.404460 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:39.404465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:39.404526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:39.432615 1848358 cri.go:89] found id: ""
	I1216 02:58:39.432630 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.432636 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:39.432644 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:39.432709 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:39.456879 1848358 cri.go:89] found id: ""
	I1216 02:58:39.456893 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.456900 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:39.456905 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:39.456966 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:39.481400 1848358 cri.go:89] found id: ""
	I1216 02:58:39.481415 1848358 logs.go:282] 0 containers: []
	W1216 02:58:39.481421 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:39.481430 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:39.481441 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:39.540413 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:39.540433 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:39.558600 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:39.558618 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:39.623191 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:39.614791   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.615619   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617365   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617686   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.619229   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:39.614791   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.615619   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617365   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.617686   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:39.619229   13188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
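With no component containers found, each cycle falls back to four host-level gather steps. They can be reproduced by hand; this is the same set of commands the log runs, assuming a shell inside the node:

    sudo journalctl -u kubelet -n 400      # last 400 lines of the kubelet unit log
    sudo journalctl -u containerd -n 400   # last 400 lines of the containerd unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status, docker fallback

Only the journalctl and dmesg steps return anything useful here; the container listing stays empty and describe nodes keeps failing.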
	I1216 02:58:39.623201 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:39.623212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:39.685663 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:39.685683 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:42.212532 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:42.242820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:42.242893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:42.277407 1848358 cri.go:89] found id: ""
	I1216 02:58:42.277427 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.277435 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:42.277441 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:42.277513 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:42.313862 1848358 cri.go:89] found id: ""
	I1216 02:58:42.313877 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.313893 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:42.313898 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:42.313963 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:42.345979 1848358 cri.go:89] found id: ""
	I1216 02:58:42.345995 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.346003 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:42.346009 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:42.346075 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:42.372530 1848358 cri.go:89] found id: ""
	I1216 02:58:42.372545 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.372552 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:42.372558 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:42.372622 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:42.400807 1848358 cri.go:89] found id: ""
	I1216 02:58:42.400821 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.400829 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:42.400834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:42.400901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:42.426053 1848358 cri.go:89] found id: ""
	I1216 02:58:42.426067 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.426074 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:42.426079 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:42.426137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:42.453460 1848358 cri.go:89] found id: ""
	I1216 02:58:42.453475 1848358 logs.go:282] 0 containers: []
	W1216 02:58:42.453482 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:42.453490 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:42.453500 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:42.509219 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:42.509237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:42.526995 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:42.527011 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:42.589697 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:42.581361   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.581873   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.583507   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.584135   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.585772   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:42.581361   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.581873   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.583507   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.584135   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:42.585772   13295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:42.589706 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:42.589723 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:42.655306 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:42.655326 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:45.183328 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:45.217035 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:45.217117 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:45.257225 1848358 cri.go:89] found id: ""
	I1216 02:58:45.257247 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.257258 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:45.257264 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:45.257334 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:45.304389 1848358 cri.go:89] found id: ""
	I1216 02:58:45.304407 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.304416 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:45.304423 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:45.304509 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:45.334339 1848358 cri.go:89] found id: ""
	I1216 02:58:45.334354 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.334362 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:45.334367 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:45.334435 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:45.360176 1848358 cri.go:89] found id: ""
	I1216 02:58:45.360190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.360198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:45.360203 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:45.360263 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:45.384648 1848358 cri.go:89] found id: ""
	I1216 02:58:45.384663 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.384669 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:45.384678 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:45.384738 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:45.411115 1848358 cri.go:89] found id: ""
	I1216 02:58:45.411131 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.411138 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:45.411144 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:45.411218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:45.437746 1848358 cri.go:89] found id: ""
	I1216 02:58:45.437761 1848358 logs.go:282] 0 containers: []
	W1216 02:58:45.437768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:45.437776 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:45.437797 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:45.500791 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:45.500811 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:45.530882 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:45.530899 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:45.588591 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:45.588609 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:45.605872 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:45.605900 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:45.673187 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:45.663658   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.664990   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.665900   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.667592   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.668146   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:45.663658   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.664990   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.665900   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.667592   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:45.668146   13417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
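The timestamps show the whole gather cycle re-running roughly every three seconds, gated on a pgrep for the apiserver process. A minimal sketch of that wait loop; the interval matches the spacing above, while the timeout is an assumption (the real limit is not visible in this log):

    # Poll until a kube-apiserver process for this profile appears, or give up.
    deadline=$((SECONDS + 300))   # assumed 5-minute cap; not stated in the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "timed out waiting for kube-apiserver"
            exit 1
        fi
        sleep 3   # matches the ~3 s gap between attempts above
    done

In this run the loop never succeeds: the process never starts, so the surrounding test eventually gives up on its own wait timeout.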
	I1216 02:58:48.173453 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:48.186360 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:48.186425 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:48.216541 1848358 cri.go:89] found id: ""
	I1216 02:58:48.216556 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.216563 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:48.216568 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:48.216633 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:48.243385 1848358 cri.go:89] found id: ""
	I1216 02:58:48.243399 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.243407 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:48.243412 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:48.243473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:48.268738 1848358 cri.go:89] found id: ""
	I1216 02:58:48.268752 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.268759 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:48.268764 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:48.268825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:48.293634 1848358 cri.go:89] found id: ""
	I1216 02:58:48.293649 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.293657 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:48.293662 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:48.293722 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:48.320780 1848358 cri.go:89] found id: ""
	I1216 02:58:48.320796 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.320805 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:48.320810 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:48.320872 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:48.344687 1848358 cri.go:89] found id: ""
	I1216 02:58:48.344701 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.344710 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:48.344715 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:48.344775 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:48.368368 1848358 cri.go:89] found id: ""
	I1216 02:58:48.368383 1848358 logs.go:282] 0 containers: []
	W1216 02:58:48.368390 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:48.368398 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:48.368407 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:48.424495 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:48.424515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:48.441644 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:48.441660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:48.506701 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:48.498238   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.498920   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.500649   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.501232   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.502941   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:48.498238   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.498920   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.500649   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.501232   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:48.502941   13509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:48.506710 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:48.506721 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:48.569962 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:48.569984 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:51.098190 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:51.108977 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:51.109048 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:51.134223 1848358 cri.go:89] found id: ""
	I1216 02:58:51.134237 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.134244 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:51.134249 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:51.134310 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:51.161239 1848358 cri.go:89] found id: ""
	I1216 02:58:51.161253 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.161261 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:51.161266 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:51.161326 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:51.202211 1848358 cri.go:89] found id: ""
	I1216 02:58:51.202225 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.202232 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:51.202237 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:51.202296 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:51.233630 1848358 cri.go:89] found id: ""
	I1216 02:58:51.233651 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.233658 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:51.233663 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:51.233728 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:51.270204 1848358 cri.go:89] found id: ""
	I1216 02:58:51.270219 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.270233 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:51.270238 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:51.270301 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:51.298689 1848358 cri.go:89] found id: ""
	I1216 02:58:51.298705 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.298716 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:51.298722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:51.298799 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:51.323107 1848358 cri.go:89] found id: ""
	I1216 02:58:51.323126 1848358 logs.go:282] 0 containers: []
	W1216 02:58:51.323133 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:51.323140 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:51.323150 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:51.386665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:51.386693 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:51.404372 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:51.404391 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:51.469512 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:51.460793   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.461262   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.462922   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.463490   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.465130   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:51.460793   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.461262   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.462922   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.463490   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:51.465130   13613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:51.469532 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:51.469554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:51.535704 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:51.535725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
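
The cycle above is minikube's control-plane probe: each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) is looked up with crictl, and an empty ID list is logged as "No container was found matching". A minimal standalone sketch of that probe, not minikube's actual code, assuming crictl and sudo are available on the node (file and helper names are invented for illustration):

    // probe_cri.go -- sketch of the per-component check recorded above:
    // list containers in all states, print only IDs, filter by name,
    // and treat an empty result as "component not running".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(name string) ([]string, error) {
        // Same flags as in the log lines above.
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("probe %s: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
            } else {
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }

In this run every probe returns an empty list, which is why each cycle falls through to the log-gathering steps (kubelet, dmesg, describe nodes, containerd, container status).
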
	I1216 02:58:54.065223 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:54.077244 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:54.077307 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:54.106090 1848358 cri.go:89] found id: ""
	I1216 02:58:54.106103 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.106110 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:54.106115 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:54.106177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:54.131805 1848358 cri.go:89] found id: ""
	I1216 02:58:54.131819 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.131833 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:54.131838 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:54.131899 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:54.156816 1848358 cri.go:89] found id: ""
	I1216 02:58:54.156829 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.156837 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:54.156842 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:54.156901 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:54.181654 1848358 cri.go:89] found id: ""
	I1216 02:58:54.181669 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.181693 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:54.181698 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:54.181765 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:54.219797 1848358 cri.go:89] found id: ""
	I1216 02:58:54.219812 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.219819 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:54.219833 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:54.219910 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:54.251176 1848358 cri.go:89] found id: ""
	I1216 02:58:54.251190 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.251197 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:54.251202 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:54.251265 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:54.275716 1848358 cri.go:89] found id: ""
	I1216 02:58:54.275731 1848358 logs.go:282] 0 containers: []
	W1216 02:58:54.275739 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:54.275747 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:54.275758 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:54.338395 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:54.330425   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.330860   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332372   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332678   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.334183   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:54.330425   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.330860   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332372   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.332678   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:54.334183   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:54.338408 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:54.338429 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:54.401729 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:54.401749 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:54.429361 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:54.429376 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:54.489525 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:54.489545 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
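
Every "describe nodes" attempt fails the same way: kubectl cannot even fetch the API group list because nothing is listening on localhost:8441, the apiserver port this profile was started with. A connection-refused dial error, as opposed to a timeout or a TLS error, means the port is closed outright rather than slow or misconfigured. A minimal reachability check along those lines, assuming the same port (the filename is an invention for this sketch):

    // dial_check.go -- sketch of the failure mode behind the stderr blocks
    // above: the TCP connect itself is refused, so kubectl fails before
    // any TLS or auth handshake begins.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the log: dial tcp [::1]:8441: connect: connection refused.
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }

Run on the node during the window covered by this log, a check like this would report the port closed on every attempt, consistent with the repeated stderr output.
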
	I1216 02:58:57.006993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:57.017732 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:57.017792 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:57.042221 1848358 cri.go:89] found id: ""
	I1216 02:58:57.042235 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.042242 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:57.042248 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:57.042316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:58:57.069364 1848358 cri.go:89] found id: ""
	I1216 02:58:57.069378 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.069385 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:58:57.069390 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:58:57.069450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:58:57.093795 1848358 cri.go:89] found id: ""
	I1216 02:58:57.093808 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.093815 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:58:57.093820 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:58:57.093881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:58:57.118148 1848358 cri.go:89] found id: ""
	I1216 02:58:57.118161 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.118168 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:58:57.118177 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:58:57.118235 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:58:57.142161 1848358 cri.go:89] found id: ""
	I1216 02:58:57.142175 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.142182 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:58:57.142187 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:58:57.142247 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:58:57.169165 1848358 cri.go:89] found id: ""
	I1216 02:58:57.169178 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.169186 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:58:57.169191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:58:57.169256 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:58:57.200840 1848358 cri.go:89] found id: ""
	I1216 02:58:57.200855 1848358 logs.go:282] 0 containers: []
	W1216 02:58:57.200862 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:58:57.200870 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:58:57.200881 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:58:57.260426 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:58:57.260444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:58:57.285637 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:58:57.285654 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:58:57.350704 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:58:57.342168   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.342999   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.344857   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.345195   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.346755   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:58:57.342168   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.342999   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.344857   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.345195   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:58:57.346755   13825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:58:57.350714 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:58:57.350727 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:58:57.413587 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:58:57.413606 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:58:59.944007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:58:59.954621 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:58:59.954685 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:58:59.979450 1848358 cri.go:89] found id: ""
	I1216 02:58:59.979466 1848358 logs.go:282] 0 containers: []
	W1216 02:58:59.979474 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:58:59.979479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:58:59.979543 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:00.040218 1848358 cri.go:89] found id: ""
	I1216 02:59:00.040237 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.040245 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:00.040251 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:00.040325 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:00.225643 1848358 cri.go:89] found id: ""
	I1216 02:59:00.225659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.225666 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:00.225679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:00.225749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:00.292916 1848358 cri.go:89] found id: ""
	I1216 02:59:00.292933 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.292941 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:00.292947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:00.293016 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:00.327359 1848358 cri.go:89] found id: ""
	I1216 02:59:00.327375 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.327383 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:00.327389 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:00.327463 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:00.362091 1848358 cri.go:89] found id: ""
	I1216 02:59:00.362107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.362116 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:00.362121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:00.362205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:00.392615 1848358 cri.go:89] found id: ""
	I1216 02:59:00.392648 1848358 logs.go:282] 0 containers: []
	W1216 02:59:00.392656 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:00.392665 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:00.392677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:00.411628 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:00.411646 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:00.485425 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:00.476589   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.477159   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.478750   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.479401   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.480376   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:00.476589   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.477159   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.478750   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.479401   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:00.480376   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:00.485435 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:00.485446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:00.548759 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:00.548779 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:00.579219 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:00.579235 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:03.138643 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:03.151350 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:03.151414 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:03.177456 1848358 cri.go:89] found id: ""
	I1216 02:59:03.177480 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.177489 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:03.177494 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:03.177576 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:03.209025 1848358 cri.go:89] found id: ""
	I1216 02:59:03.209054 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.209063 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:03.209068 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:03.209142 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:03.245557 1848358 cri.go:89] found id: ""
	I1216 02:59:03.245571 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.245578 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:03.245583 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:03.245651 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:03.273887 1848358 cri.go:89] found id: ""
	I1216 02:59:03.273902 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.273909 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:03.273914 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:03.273980 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:03.299955 1848358 cri.go:89] found id: ""
	I1216 02:59:03.299970 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.299977 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:03.299987 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:03.300050 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:03.325891 1848358 cri.go:89] found id: ""
	I1216 02:59:03.325906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.325913 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:03.325918 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:03.325977 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:03.353059 1848358 cri.go:89] found id: ""
	I1216 02:59:03.353073 1848358 logs.go:282] 0 containers: []
	W1216 02:59:03.353080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:03.353088 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:03.353101 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:03.409018 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:03.409040 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:03.427124 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:03.427141 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:03.498219 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:03.489642   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.490076   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.491637   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.492014   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.493527   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:03.489642   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.490076   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.491637   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.492014   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:03.493527   14036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:03.498236 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:03.498250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:03.563005 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:03.563031 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:06.091678 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:06.102426 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:06.102489 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:06.127426 1848358 cri.go:89] found id: ""
	I1216 02:59:06.127439 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.127446 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:06.127452 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:06.127511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:06.152255 1848358 cri.go:89] found id: ""
	I1216 02:59:06.152270 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.152277 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:06.152282 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:06.152344 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:06.181806 1848358 cri.go:89] found id: ""
	I1216 02:59:06.181832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.181840 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:06.181846 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:06.181909 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:06.211543 1848358 cri.go:89] found id: ""
	I1216 02:59:06.211558 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.211565 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:06.211576 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:06.211638 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:06.239433 1848358 cri.go:89] found id: ""
	I1216 02:59:06.239448 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.239454 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:06.239460 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:06.239521 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:06.265180 1848358 cri.go:89] found id: ""
	I1216 02:59:06.265199 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.265206 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:06.265212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:06.265273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:06.288594 1848358 cri.go:89] found id: ""
	I1216 02:59:06.288608 1848358 logs.go:282] 0 containers: []
	W1216 02:59:06.288615 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:06.288622 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:06.288633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:06.347416 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:06.347440 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:06.365120 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:06.365137 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:06.429753 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:06.422151   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.422683   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424172   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424494   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.425949   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:06.422151   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.422683   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424172   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.424494   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:06.425949   14143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:06.429762 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:06.429772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:06.491187 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:06.491205 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:09.021976 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:09.032138 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:09.032199 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:09.056495 1848358 cri.go:89] found id: ""
	I1216 02:59:09.056509 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.056517 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:09.056522 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:09.056579 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:09.085249 1848358 cri.go:89] found id: ""
	I1216 02:59:09.085263 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.085269 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:09.085275 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:09.085336 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:09.109270 1848358 cri.go:89] found id: ""
	I1216 02:59:09.109284 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.109291 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:09.109296 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:09.109365 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:09.134217 1848358 cri.go:89] found id: ""
	I1216 02:59:09.134231 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.134238 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:09.134243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:09.134305 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:09.158656 1848358 cri.go:89] found id: ""
	I1216 02:59:09.158670 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.158677 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:09.158682 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:09.158749 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:09.190922 1848358 cri.go:89] found id: ""
	I1216 02:59:09.190937 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.190944 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:09.190949 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:09.191020 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:09.231605 1848358 cri.go:89] found id: ""
	I1216 02:59:09.231619 1848358 logs.go:282] 0 containers: []
	W1216 02:59:09.231633 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:09.231642 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:09.231652 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:09.293613 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:09.293633 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:09.310949 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:09.310966 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:09.378806 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:09.369691   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.370608   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372360   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372849   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.374328   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:09.369691   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.370608   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372360   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.372849   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:09.374328   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:09.378816 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:09.378827 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:09.440510 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:09.440528 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:11.972007 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:11.982340 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:11.982402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:12.014868 1848358 cri.go:89] found id: ""
	I1216 02:59:12.014883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.014890 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:12.014895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:12.014969 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:12.040987 1848358 cri.go:89] found id: ""
	I1216 02:59:12.041002 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.041008 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:12.041013 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:12.041090 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:12.065526 1848358 cri.go:89] found id: ""
	I1216 02:59:12.065540 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.065561 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:12.065566 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:12.065635 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:12.093806 1848358 cri.go:89] found id: ""
	I1216 02:59:12.093833 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.093841 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:12.093849 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:12.093921 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:12.121567 1848358 cri.go:89] found id: ""
	I1216 02:59:12.121595 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.121602 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:12.121607 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:12.121677 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:12.144869 1848358 cri.go:89] found id: ""
	I1216 02:59:12.144883 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.144890 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:12.144895 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:12.144955 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:12.168723 1848358 cri.go:89] found id: ""
	I1216 02:59:12.168737 1848358 logs.go:282] 0 containers: []
	W1216 02:59:12.168744 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:12.168752 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:12.168769 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:12.185531 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:12.185547 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:12.264487 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:12.255585   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.256344   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258045   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258783   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.260488   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:12.255585   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.256344   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258045   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.258783   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:12.260488   14351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:12.264497 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:12.264508 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:12.326049 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:12.326068 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:12.353200 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:12.353216 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:14.910970 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:14.924577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:14.924643 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:14.953399 1848358 cri.go:89] found id: ""
	I1216 02:59:14.953413 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.953420 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:14.953432 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:14.953495 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:14.978792 1848358 cri.go:89] found id: ""
	I1216 02:59:14.978806 1848358 logs.go:282] 0 containers: []
	W1216 02:59:14.978815 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:14.978821 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:14.978880 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:15.008511 1848358 cri.go:89] found id: ""
	I1216 02:59:15.008528 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.008536 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:15.008542 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:15.008624 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:15.053197 1848358 cri.go:89] found id: ""
	I1216 02:59:15.053213 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.053220 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:15.053226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:15.053293 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:15.082542 1848358 cri.go:89] found id: ""
	I1216 02:59:15.082557 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.082564 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:15.082570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:15.082634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:15.109527 1848358 cri.go:89] found id: ""
	I1216 02:59:15.109542 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.109550 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:15.109556 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:15.109634 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:15.137809 1848358 cri.go:89] found id: ""
	I1216 02:59:15.137823 1848358 logs.go:282] 0 containers: []
	W1216 02:59:15.137830 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:15.137838 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:15.137849 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:15.211501 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:15.202592   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.203475   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205236   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.205549   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:15.207125   14450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:15.211511 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:15.211523 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:15.285555 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:15.285576 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:15.314442 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:15.314458 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:15.370796 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:15.370818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
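
The cycle above repeats the same per-component check for every control-plane piece. A minimal sketch of that polling, collapsed into one loop and run on the minikube node (for example via "minikube ssh" into the profile under test); component names are taken verbatim from the log lines above, and the sketch assumes crictl is on the node's PATH:

    # Sketch: the per-component container polling the log performs above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "found $name: $ids"
      fi
    done

Every iteration in this run returns an empty list, which is why the gather falls back to journal, dmesg, and container-status output below.
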
	I1216 02:59:17.889239 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:17.899171 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:17.899236 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:17.924099 1848358 cri.go:89] found id: ""
	I1216 02:59:17.924113 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.924121 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:17.924126 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:17.924187 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:17.950817 1848358 cri.go:89] found id: ""
	I1216 02:59:17.950832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.950838 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:17.950843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:17.950903 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:17.976899 1848358 cri.go:89] found id: ""
	I1216 02:59:17.976913 1848358 logs.go:282] 0 containers: []
	W1216 02:59:17.976920 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:17.976925 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:17.976987 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:18.003139 1848358 cri.go:89] found id: ""
	I1216 02:59:18.003156 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.003164 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:18.003169 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:18.003244 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:18.032644 1848358 cri.go:89] found id: ""
	I1216 02:59:18.032659 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.032666 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:18.032671 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:18.032740 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:18.058880 1848358 cri.go:89] found id: ""
	I1216 02:59:18.058895 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.058906 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:18.058915 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:18.058988 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:18.084275 1848358 cri.go:89] found id: ""
	I1216 02:59:18.084290 1848358 logs.go:282] 0 containers: []
	W1216 02:59:18.084298 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:18.084306 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:18.084318 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:18.146637 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:18.146665 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:18.164002 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:18.164022 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:18.241086 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:18.231204   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.232053   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.233635   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.234184   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:18.235838   14562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:18.241097 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:18.241110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:18.306777 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:18.306796 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
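
The "container status" command above is a single fallback chain. Written out more readably, assuming only that one of crictl or docker is installed (note the original one-liner also reaches docker when crictl exists but exits nonzero; this sketch keeps only the common path):

    # Readable sketch of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # Prefer crictl when it resolves on PATH; otherwise list containers
    # with docker instead.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi
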
	I1216 02:59:20.840754 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:20.850885 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:20.850942 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:20.880985 1848358 cri.go:89] found id: ""
	I1216 02:59:20.881000 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.881007 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:20.881012 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:20.881071 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:20.904789 1848358 cri.go:89] found id: ""
	I1216 02:59:20.904803 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.904810 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:20.904815 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:20.904873 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:20.929350 1848358 cri.go:89] found id: ""
	I1216 02:59:20.929362 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.929370 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:20.929381 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:20.929438 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:20.953473 1848358 cri.go:89] found id: ""
	I1216 02:59:20.953487 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.953493 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:20.953499 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:20.953558 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:20.977718 1848358 cri.go:89] found id: ""
	I1216 02:59:20.977731 1848358 logs.go:282] 0 containers: []
	W1216 02:59:20.977738 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:20.977743 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:20.977800 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:21.001640 1848358 cri.go:89] found id: ""
	I1216 02:59:21.001657 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.001664 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:21.001669 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:21.001752 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:21.030827 1848358 cri.go:89] found id: ""
	I1216 02:59:21.030840 1848358 logs.go:282] 0 containers: []
	W1216 02:59:21.030847 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:21.030855 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:21.030865 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:21.086683 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:21.086703 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:21.106615 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:21.106638 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:21.196393 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:21.180051   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.181461   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.182376   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186322   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:21.186918   14670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:21.196410 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:21.196420 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:21.259711 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:21.259730 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
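
Every "describe nodes" attempt in this section fails the same way: connection refused on localhost:8441, the apiserver port this profile was started with. Before re-running kubectl, a cheap liveness probe against the apiserver is often enough; a sketch, assuming the cluster's default anonymous access to the apiserver health endpoints:

    # Probe the apiserver health endpoint directly on the node.
    # -k skips certificate verification (fine for a liveness-only check),
    # -f makes curl exit nonzero on HTTP errors, -s silences progress.
    if curl -ksf https://localhost:8441/livez >/dev/null; then
      echo "apiserver answering on 8441"
    else
      echo "apiserver not answering on 8441 (matches the refusals above)"
    fi
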
	I1216 02:59:23.788985 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:23.801081 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:23.801153 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:23.831711 1848358 cri.go:89] found id: ""
	I1216 02:59:23.831732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.831740 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:23.831745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:23.831812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:23.857025 1848358 cri.go:89] found id: ""
	I1216 02:59:23.857040 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.857047 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:23.857052 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:23.857115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:23.885653 1848358 cri.go:89] found id: ""
	I1216 02:59:23.885667 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.885674 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:23.885679 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:23.885739 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:23.912974 1848358 cri.go:89] found id: ""
	I1216 02:59:23.912987 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.912996 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:23.913001 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:23.913062 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:23.936892 1848358 cri.go:89] found id: ""
	I1216 02:59:23.936906 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.936914 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:23.936919 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:23.936978 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:23.959826 1848358 cri.go:89] found id: ""
	I1216 02:59:23.959841 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.959848 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:23.959853 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:23.959912 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:23.987747 1848358 cri.go:89] found id: ""
	I1216 02:59:23.987760 1848358 logs.go:282] 0 containers: []
	W1216 02:59:23.987767 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:23.987775 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:23.987785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:24.043435 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:24.043453 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:24.060830 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:24.060848 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:24.129870 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:24.121071   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.121882   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.123511   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.124023   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:24.125643   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:24.129882 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:24.129893 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:24.192043 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:24.192064 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
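
Note that the "describe nodes" gather does not rely on a host kubectl: it runs the binary minikube stages under /var/lib/minikube/binaries against the node-local kubeconfig. The same invocation can be repeated by hand on the node when triaging; both paths are taken verbatim from the log lines above:

    # Run the staged kubectl against the node-local kubeconfig.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig describe nodes
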
	I1216 02:59:26.722933 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:26.733462 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:26.733528 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:26.757094 1848358 cri.go:89] found id: ""
	I1216 02:59:26.757109 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.757115 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:26.757121 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:26.757190 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:26.785265 1848358 cri.go:89] found id: ""
	I1216 02:59:26.785279 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.785286 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:26.785291 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:26.785348 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:26.809734 1848358 cri.go:89] found id: ""
	I1216 02:59:26.809748 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.809755 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:26.809760 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:26.809823 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:26.833900 1848358 cri.go:89] found id: ""
	I1216 02:59:26.833914 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.833921 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:26.833926 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:26.833983 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:26.858364 1848358 cri.go:89] found id: ""
	I1216 02:59:26.858381 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.858388 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:26.858392 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:26.858476 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:26.884221 1848358 cri.go:89] found id: ""
	I1216 02:59:26.884235 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.884242 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:26.884247 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:26.884306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:26.909747 1848358 cri.go:89] found id: ""
	I1216 02:59:26.909761 1848358 logs.go:282] 0 containers: []
	W1216 02:59:26.909768 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:26.909776 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:26.909785 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:26.965217 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:26.965237 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:26.982549 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:26.982573 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:27.049273 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:27.041323   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.041704   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043346   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.043928   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:27.045499   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:27.049282 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:27.049293 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:27.112656 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:27.112677 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:29.642709 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:29.652965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:29.653051 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:29.681994 1848358 cri.go:89] found id: ""
	I1216 02:59:29.682008 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.682030 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:29.682037 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:29.682106 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:29.710335 1848358 cri.go:89] found id: ""
	I1216 02:59:29.710350 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.710357 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:29.710363 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:29.710454 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:29.737846 1848358 cri.go:89] found id: ""
	I1216 02:59:29.737861 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.737868 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:29.737873 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:29.737943 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:29.763917 1848358 cri.go:89] found id: ""
	I1216 02:59:29.763931 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.763938 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:29.763944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:29.764015 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:29.788324 1848358 cri.go:89] found id: ""
	I1216 02:59:29.788338 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.788345 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:29.788351 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:29.788409 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:29.812477 1848358 cri.go:89] found id: ""
	I1216 02:59:29.812490 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.812497 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:29.812502 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:29.812561 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:29.840464 1848358 cri.go:89] found id: ""
	I1216 02:59:29.840479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:29.840486 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:29.840495 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:29.840509 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:29.905495 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:29.897197   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.898006   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899547   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.899856   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:29.901335   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:29.905505 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:29.905515 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:29.967090 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:29.967110 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:29.999894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:29.999910 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:30.095570 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:30.095596 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
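
With no containers to inspect, each cycle falls back to host-level logs. That bundle is reproducible by hand; the commands below are verbatim from the Run lines above:

    # Last 400 lines of the kubelet and containerd units, plus kernel
    # messages at warning severity and above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
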
	I1216 02:59:32.614024 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:32.624941 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:32.625007 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:32.649578 1848358 cri.go:89] found id: ""
	I1216 02:59:32.649593 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.649601 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:32.649606 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:32.649665 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:32.678365 1848358 cri.go:89] found id: ""
	I1216 02:59:32.678379 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.678386 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:32.678391 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:32.678450 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:32.703205 1848358 cri.go:89] found id: ""
	I1216 02:59:32.703219 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.703226 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:32.703231 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:32.703295 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:32.727484 1848358 cri.go:89] found id: ""
	I1216 02:59:32.727499 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.727506 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:32.727511 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:32.727568 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:32.753092 1848358 cri.go:89] found id: ""
	I1216 02:59:32.753106 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.753113 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:32.753119 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:32.753178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:32.781551 1848358 cri.go:89] found id: ""
	I1216 02:59:32.781565 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.781572 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:32.781577 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:32.781636 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:32.807153 1848358 cri.go:89] found id: ""
	I1216 02:59:32.807168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:32.807176 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:32.807184 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:32.807199 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:32.863763 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:32.863782 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:32.880478 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:32.880495 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:32.950082 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:32.941362   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.942084   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.943575   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.944217   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:32.945946   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:32.950092 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:32.950102 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:33.016099 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:33.016121 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:35.546066 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:35.557055 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:35.557115 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:35.582927 1848358 cri.go:89] found id: ""
	I1216 02:59:35.582951 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.582960 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:35.582965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:35.583033 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:35.608110 1848358 cri.go:89] found id: ""
	I1216 02:59:35.608124 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.608131 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:35.608141 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:35.608203 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:35.632465 1848358 cri.go:89] found id: ""
	I1216 02:59:35.632479 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.632485 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:35.632490 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:35.632555 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:35.661165 1848358 cri.go:89] found id: ""
	I1216 02:59:35.661179 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.661198 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:35.661204 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:35.661272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:35.686050 1848358 cri.go:89] found id: ""
	I1216 02:59:35.686064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.686081 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:35.686087 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:35.686156 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:35.711189 1848358 cri.go:89] found id: ""
	I1216 02:59:35.711203 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.711210 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:35.711215 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:35.711276 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:35.735024 1848358 cri.go:89] found id: ""
	I1216 02:59:35.735072 1848358 logs.go:282] 0 containers: []
	W1216 02:59:35.735080 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:35.735089 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:35.735099 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:35.790017 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:35.790036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:35.807195 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:35.807212 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:35.870014 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:35.862369   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.862875   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864373   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.864766   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:35.866205   15198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:35.870024 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:35.870036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:35.933113 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:35.933134 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:38.460684 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:38.471131 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:38.471193 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:38.508160 1848358 cri.go:89] found id: ""
	I1216 02:59:38.508175 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.508183 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:38.508188 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:38.508257 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:38.540297 1848358 cri.go:89] found id: ""
	I1216 02:59:38.540312 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.540320 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:38.540324 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:38.540388 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:38.566230 1848358 cri.go:89] found id: ""
	I1216 02:59:38.566244 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.566252 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:38.566257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:38.566321 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:38.591818 1848358 cri.go:89] found id: ""
	I1216 02:59:38.591832 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.591839 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:38.591844 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:38.591911 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:38.618603 1848358 cri.go:89] found id: ""
	I1216 02:59:38.618617 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.618624 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:38.618629 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:38.618689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:38.643310 1848358 cri.go:89] found id: ""
	I1216 02:59:38.643324 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.643331 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:38.643337 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:38.643402 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:38.667065 1848358 cri.go:89] found id: ""
	I1216 02:59:38.667080 1848358 logs.go:282] 0 containers: []
	W1216 02:59:38.667087 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:38.667095 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:38.667106 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:38.699522 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:38.699540 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:38.757880 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:38.757898 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:38.774888 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:38.774903 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:38.842015 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:38.834115   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.834681   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836207   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.836756   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:38.838251   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 02:59:38.842025 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:38.842036 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:41.405157 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:41.416379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:41.416447 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:41.446560 1848358 cri.go:89] found id: ""
	I1216 02:59:41.446578 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.446596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:41.446602 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:41.446675 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:41.483188 1848358 cri.go:89] found id: ""
	I1216 02:59:41.483202 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.483209 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:41.483213 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:41.483274 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:41.516110 1848358 cri.go:89] found id: ""
	I1216 02:59:41.516140 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.516147 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:41.516152 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:41.516218 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:41.540839 1848358 cri.go:89] found id: ""
	I1216 02:59:41.540853 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.540860 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:41.540866 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:41.540926 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:41.566596 1848358 cri.go:89] found id: ""
	I1216 02:59:41.566622 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.566629 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:41.566634 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:41.566706 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:41.590702 1848358 cri.go:89] found id: ""
	I1216 02:59:41.590717 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.590724 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:41.590729 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:41.590791 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:41.616252 1848358 cri.go:89] found id: ""
	I1216 02:59:41.616276 1848358 logs.go:282] 0 containers: []
	W1216 02:59:41.616283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:41.616291 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:41.616303 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:41.645509 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:41.645525 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:41.704141 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:41.704159 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:41.721706 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:41.721725 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:41.783974 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:41.776246   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.776782   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778294   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778717   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.780191   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:41.776246   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.776782   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778294   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.778717   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:41.780191   15418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:41.783984 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:41.784019 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:44.346692 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:44.357118 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:44.357181 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:44.382575 1848358 cri.go:89] found id: ""
	I1216 02:59:44.382589 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.382596 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:44.382601 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:44.382666 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:44.407349 1848358 cri.go:89] found id: ""
	I1216 02:59:44.407363 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.407370 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:44.407375 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:44.407442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:44.438660 1848358 cri.go:89] found id: ""
	I1216 02:59:44.438674 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.438681 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:44.438693 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:44.438748 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:44.483154 1848358 cri.go:89] found id: ""
	I1216 02:59:44.483168 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.483175 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:44.483180 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:44.483239 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:44.512253 1848358 cri.go:89] found id: ""
	I1216 02:59:44.512267 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.512274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:44.512283 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:44.512341 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:44.537396 1848358 cri.go:89] found id: ""
	I1216 02:59:44.537410 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.537427 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:44.537434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:44.537510 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:44.562261 1848358 cri.go:89] found id: ""
	I1216 02:59:44.562275 1848358 logs.go:282] 0 containers: []
	W1216 02:59:44.562283 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:44.562291 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:44.562300 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:44.630850 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:44.630877 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:44.660268 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:44.660294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:44.721274 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:44.721294 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:44.738464 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:44.738482 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:44.804552 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:44.796057   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.796830   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798532   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798943   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.800543   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:44.796057   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.796830   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798532   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.798943   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:44.800543   15526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:47.304816 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:47.315117 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:47.315178 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:47.344292 1848358 cri.go:89] found id: ""
	I1216 02:59:47.344306 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.344314 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:47.344319 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:47.344381 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:47.367920 1848358 cri.go:89] found id: ""
	I1216 02:59:47.367934 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.367942 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:47.367947 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:47.368017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:47.392383 1848358 cri.go:89] found id: ""
	I1216 02:59:47.392397 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.392404 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:47.392409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:47.392473 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:47.415620 1848358 cri.go:89] found id: ""
	I1216 02:59:47.415634 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.415641 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:47.415646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:47.415703 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:47.454281 1848358 cri.go:89] found id: ""
	I1216 02:59:47.454295 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.454302 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:47.454308 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:47.454367 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:47.487808 1848358 cri.go:89] found id: ""
	I1216 02:59:47.487822 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.487829 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:47.487834 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:47.487893 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:47.515510 1848358 cri.go:89] found id: ""
	I1216 02:59:47.515523 1848358 logs.go:282] 0 containers: []
	W1216 02:59:47.515531 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:47.515538 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:47.515551 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:47.582935 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:47.574325   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.575137   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.576881   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.577372   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.578856   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:47.574325   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.575137   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.576881   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.577372   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:47.578856   15614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:47.582951 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:47.582963 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:47.644716 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:47.644735 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:47.673055 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:47.673071 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:47.729448 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:47.729467 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:50.247207 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:50.257829 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:50.257894 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:50.282406 1848358 cri.go:89] found id: ""
	I1216 02:59:50.282422 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.282429 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:50.282435 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:50.282497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:50.307428 1848358 cri.go:89] found id: ""
	I1216 02:59:50.307442 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.307450 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:50.307455 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:50.307514 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:50.332093 1848358 cri.go:89] found id: ""
	I1216 02:59:50.332107 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.332114 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:50.332120 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:50.332179 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:50.357137 1848358 cri.go:89] found id: ""
	I1216 02:59:50.357151 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.357158 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:50.357163 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:50.357227 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:50.380923 1848358 cri.go:89] found id: ""
	I1216 02:59:50.380938 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.380945 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:50.380950 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:50.381008 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:50.404673 1848358 cri.go:89] found id: ""
	I1216 02:59:50.404687 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.404695 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:50.404700 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:50.404762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:50.428594 1848358 cri.go:89] found id: ""
	I1216 02:59:50.428609 1848358 logs.go:282] 0 containers: []
	W1216 02:59:50.428616 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:50.428624 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:50.428634 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:50.511977 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:50.503194   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.503744   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505371   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505827   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.507476   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:50.503194   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.503744   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505371   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.505827   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:50.507476   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:50.511987 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:50.511998 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:50.575372 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:50.575394 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:50.603193 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:50.603215 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:50.660351 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:50.660370 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:53.177329 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:53.187812 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:53.187876 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:53.212765 1848358 cri.go:89] found id: ""
	I1216 02:59:53.212780 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.212787 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:53.212792 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:53.212855 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:53.237571 1848358 cri.go:89] found id: ""
	I1216 02:59:53.237584 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.237591 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:53.237596 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:53.237657 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:53.261989 1848358 cri.go:89] found id: ""
	I1216 02:59:53.262003 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.262010 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:53.262015 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:53.262077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:53.291843 1848358 cri.go:89] found id: ""
	I1216 02:59:53.291857 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.291864 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:53.291869 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:53.291929 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:53.316569 1848358 cri.go:89] found id: ""
	I1216 02:59:53.316583 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.316590 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:53.316595 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:53.316655 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:53.340200 1848358 cri.go:89] found id: ""
	I1216 02:59:53.340214 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.340221 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:53.340226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:53.340284 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:53.364767 1848358 cri.go:89] found id: ""
	I1216 02:59:53.364782 1848358 logs.go:282] 0 containers: []
	W1216 02:59:53.364789 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:53.364796 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:53.364806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:53.423540 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:53.423559 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:53.440975 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:53.440990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:53.518181 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:53.509741   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.510408   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512145   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512724   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.514366   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:53.509741   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.510408   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512145   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.512724   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:53.514366   15829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:53.518190 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:53.518201 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:53.580231 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:53.580250 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:56.109099 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:56.119430 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:56.119493 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:56.144050 1848358 cri.go:89] found id: ""
	I1216 02:59:56.144064 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.144072 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:56.144077 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:56.144137 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:56.168768 1848358 cri.go:89] found id: ""
	I1216 02:59:56.168783 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.168790 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:56.168794 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:56.168858 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:56.193611 1848358 cri.go:89] found id: ""
	I1216 02:59:56.193625 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.193633 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:56.193637 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:56.193694 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:56.218383 1848358 cri.go:89] found id: ""
	I1216 02:59:56.218396 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.218415 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:56.218420 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:56.218532 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:56.244850 1848358 cri.go:89] found id: ""
	I1216 02:59:56.244864 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.244871 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:56.244888 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:56.244960 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:56.272142 1848358 cri.go:89] found id: ""
	I1216 02:59:56.272167 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.272174 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:56.272181 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:56.272252 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:56.296464 1848358 cri.go:89] found id: ""
	I1216 02:59:56.296478 1848358 logs.go:282] 0 containers: []
	W1216 02:59:56.296485 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:56.296493 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:56.296503 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:56.351797 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:56.351818 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:56.368635 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:56.368655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:56.433327 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:56.425076   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.425853   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.427469   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.428121   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.429570   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:56.425076   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.425853   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.427469   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.428121   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:56.429570   15929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 02:59:56.433336 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:56.433346 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:56.509361 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:56.509380 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:59.037187 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:59:59.047286 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 02:59:59.047351 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 02:59:59.072817 1848358 cri.go:89] found id: ""
	I1216 02:59:59.072831 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.072838 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 02:59:59.072843 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 02:59:59.072914 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 02:59:59.098681 1848358 cri.go:89] found id: ""
	I1216 02:59:59.098696 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.098708 1848358 logs.go:284] No container was found matching "etcd"
	I1216 02:59:59.098713 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 02:59:59.098774 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 02:59:59.124932 1848358 cri.go:89] found id: ""
	I1216 02:59:59.124945 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.124953 1848358 logs.go:284] No container was found matching "coredns"
	I1216 02:59:59.124958 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 02:59:59.125017 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 02:59:59.149561 1848358 cri.go:89] found id: ""
	I1216 02:59:59.149575 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.149581 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 02:59:59.149586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 02:59:59.149646 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 02:59:59.174402 1848358 cri.go:89] found id: ""
	I1216 02:59:59.174417 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.174426 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 02:59:59.174431 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 02:59:59.174497 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 02:59:59.199717 1848358 cri.go:89] found id: ""
	I1216 02:59:59.199732 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.199740 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 02:59:59.199745 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 02:59:59.199812 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 02:59:59.225754 1848358 cri.go:89] found id: ""
	I1216 02:59:59.225768 1848358 logs.go:282] 0 containers: []
	W1216 02:59:59.225787 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 02:59:59.225795 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 02:59:59.225806 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 02:59:59.288033 1848358 logs.go:123] Gathering logs for container status ...
	I1216 02:59:59.288058 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 02:59:59.316114 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 02:59:59.316130 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 02:59:59.373962 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 02:59:59.373981 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 02:59:59.390958 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 02:59:59.390978 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 02:59:59.466112 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 02:59:59.455145   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.456493   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458087   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458370   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.462047   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 02:59:59.455145   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.456493   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458087   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.458370   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 02:59:59.462047   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:01.968417 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:01.996618 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:01.996689 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:02.075342 1848358 cri.go:89] found id: ""
	I1216 03:00:02.075366 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.075373 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:02.075379 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:02.075457 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:02.107614 1848358 cri.go:89] found id: ""
	I1216 03:00:02.107629 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.107637 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:02.107646 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:02.107720 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:02.137752 1848358 cri.go:89] found id: ""
	I1216 03:00:02.137768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.137776 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:02.137782 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:02.137853 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:02.169435 1848358 cri.go:89] found id: ""
	I1216 03:00:02.169452 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.169459 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:02.169465 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:02.169546 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:02.198391 1848358 cri.go:89] found id: ""
	I1216 03:00:02.198423 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.198431 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:02.198438 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:02.198511 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:02.227862 1848358 cri.go:89] found id: ""
	I1216 03:00:02.227877 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.227885 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:02.227891 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:02.227959 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:02.256236 1848358 cri.go:89] found id: ""
	I1216 03:00:02.256251 1848358 logs.go:282] 0 containers: []
	W1216 03:00:02.256269 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:02.256278 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:02.256290 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:02.315559 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:02.315582 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:02.334230 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:02.334248 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:02.404903 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:02.395443   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.396222   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.398711   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.399382   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.400828   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:02.395443   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.396222   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.398711   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.399382   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:02.400828   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:02.404912 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:02.404923 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:02.469074 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:02.469095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
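
The pattern above (pgrep for a kube-apiserver process, crictl queries for each control-plane container, then a fresh round of log gathering when nothing turns up) is a health-check poll; the timestamps below show it re-running roughly every three seconds. A minimal sketch of that polling shape in Go; runSSH is a hypothetical stand-in for minikube's ssh_runner, and the 3-second interval is an assumption read off the timestamps, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH stands in for minikube's ssh_runner; here it simply runs the
// command through a local bash, the way the log lines show it being invoked.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForAPIServer polls for an apiserver process until the deadline,
// mirroring the loop visible in the log (sketch only).
func waitForAPIServer(deadline time.Time) error {
	for time.Now().Before(deadline) {
		// the same probe the log shows above
		if runSSH(`sudo pgrep -xnf 'kube-apiserver.*minikube.*'`) == nil {
			return nil // a kube-apiserver process exists
		}
		// the real code re-gathers kubelet/dmesg/containerd logs here
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("kube-apiserver process never appeared")
}

func main() {
	if err := waitForAPIServer(time.Now().Add(30 * time.Second)); err != nil {
		fmt.Println(err)
	}
}
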
	I1216 03:00:05.003993 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:05.018300 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:05.018420 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:05.047301 1848358 cri.go:89] found id: ""
	I1216 03:00:05.047316 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.047323 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:05.047335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:05.047400 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:05.072682 1848358 cri.go:89] found id: ""
	I1216 03:00:05.072697 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.072704 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:05.072709 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:05.072770 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:05.102478 1848358 cri.go:89] found id: ""
	I1216 03:00:05.102493 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.102502 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:05.102507 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:05.102578 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:05.132728 1848358 cri.go:89] found id: ""
	I1216 03:00:05.132743 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.132750 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:05.132756 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:05.132825 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:05.158706 1848358 cri.go:89] found id: ""
	I1216 03:00:05.158721 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.158728 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:05.158733 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:05.158795 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:05.184666 1848358 cri.go:89] found id: ""
	I1216 03:00:05.184681 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.184688 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:05.184694 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:05.184756 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:05.216197 1848358 cri.go:89] found id: ""
	I1216 03:00:05.216213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:05.216221 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:05.216229 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:05.216239 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:05.278419 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:05.278439 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:05.309753 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:05.309771 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:05.366862 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:05.366880 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:05.384427 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:05.384446 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:05.452157 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:05.443910   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.444698   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446307   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.446727   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:05.448188   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:07.952402 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:07.967145 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:07.967225 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:07.998164 1848358 cri.go:89] found id: ""
	I1216 03:00:07.998178 1848358 logs.go:282] 0 containers: []
	W1216 03:00:07.998185 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:07.998191 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:07.998251 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:08.032873 1848358 cri.go:89] found id: ""
	I1216 03:00:08.032889 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.032896 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:08.032901 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:08.032964 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:08.059832 1848358 cri.go:89] found id: ""
	I1216 03:00:08.059846 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.059854 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:08.059859 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:08.059933 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:08.087232 1848358 cri.go:89] found id: ""
	I1216 03:00:08.087246 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.087253 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:08.087258 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:08.087316 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:08.114253 1848358 cri.go:89] found id: ""
	I1216 03:00:08.114267 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.114274 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:08.114280 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:08.114343 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:08.139972 1848358 cri.go:89] found id: ""
	I1216 03:00:08.139987 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.139994 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:08.139999 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:08.140141 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:08.165613 1848358 cri.go:89] found id: ""
	I1216 03:00:08.165628 1848358 logs.go:282] 0 containers: []
	W1216 03:00:08.165637 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:08.165645 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:08.165655 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:08.221696 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:08.221715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:08.240189 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:08.240206 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:08.320945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:08.311750   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.312401   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314217   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.314799   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:08.316399   16345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:08.320954 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:08.320964 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:08.384243 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:08.384275 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
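
Each "listing CRI containers" / "found id" pair above is a single `sudo crictl ps -a --quiet --name=<name>` call: --quiet prints bare container IDs, one per line, so empty output is what produces the `found id: ""` and `0 containers: []` lines. A sketch of the same query, assuming crictl is available on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the container IDs matching a name filter, the way
// the log's `sudo crictl ps -a --quiet --name=<name>` calls do (sketch only).
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet emits one ID per line; no output means no matching container
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	fmt.Println(ids, err) // an empty slice here matches the "0 containers" lines above
}
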
	I1216 03:00:10.913864 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:10.926998 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:10.927108 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:10.962440 1848358 cri.go:89] found id: ""
	I1216 03:00:10.962454 1848358 logs.go:282] 0 containers: []
	W1216 03:00:10.962461 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:10.962466 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:10.962526 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:11.004569 1848358 cri.go:89] found id: ""
	I1216 03:00:11.004589 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.004598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:11.004610 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:11.005096 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:11.034401 1848358 cri.go:89] found id: ""
	I1216 03:00:11.034415 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.034429 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:11.034434 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:11.034508 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:11.065292 1848358 cri.go:89] found id: ""
	I1216 03:00:11.065309 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.065317 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:11.065325 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:11.065394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:11.092043 1848358 cri.go:89] found id: ""
	I1216 03:00:11.092057 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.092065 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:11.092070 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:11.092163 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:11.121914 1848358 cri.go:89] found id: ""
	I1216 03:00:11.121929 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.121936 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:11.121942 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:11.122014 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:11.147863 1848358 cri.go:89] found id: ""
	I1216 03:00:11.147879 1848358 logs.go:282] 0 containers: []
	W1216 03:00:11.147886 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:11.147894 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:11.147906 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:11.213267 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:11.213287 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:11.231545 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:11.231561 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:11.303516 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:11.294839   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.295358   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.296660   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.297169   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:11.298964   16452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:11.303525 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:11.303544 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:11.375152 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:11.375181 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:13.905997 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:13.916685 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:13.916754 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:13.946670 1848358 cri.go:89] found id: ""
	I1216 03:00:13.946698 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.946705 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:13.946711 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:13.946782 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:13.978544 1848358 cri.go:89] found id: ""
	I1216 03:00:13.978558 1848358 logs.go:282] 0 containers: []
	W1216 03:00:13.978565 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:13.978570 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:13.978630 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:14.010045 1848358 cri.go:89] found id: ""
	I1216 03:00:14.010060 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.010068 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:14.010073 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:14.010148 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:14.039695 1848358 cri.go:89] found id: ""
	I1216 03:00:14.039709 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.039717 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:14.039722 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:14.039786 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:14.065918 1848358 cri.go:89] found id: ""
	I1216 03:00:14.065932 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.065939 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:14.065944 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:14.066002 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:14.092594 1848358 cri.go:89] found id: ""
	I1216 03:00:14.092607 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.092615 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:14.092620 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:14.092684 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:14.117022 1848358 cri.go:89] found id: ""
	I1216 03:00:14.117036 1848358 logs.go:282] 0 containers: []
	W1216 03:00:14.117043 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:14.117052 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:14.117063 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:14.145392 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:14.145409 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:14.201319 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:14.201338 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:14.218382 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:14.218397 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:14.286945 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:14.279281   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.279802   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281416   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.281996   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:14.283003   16566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:14.286956 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:14.286968 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
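
The recurring `dial tcp [::1]:8441: connect: connection refused` in the describe-nodes attempts means the TCP connection is rejected outright, i.e. nothing is listening on the apiserver port at all, which is consistent with the empty crictl listings for kube-apiserver. A direct probe of that condition, sketched in Go with the host and port taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// probe the apiserver port the same way kubectl's dial does
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}
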
	I1216 03:00:16.848830 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:16.859224 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:16.859288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:16.900559 1848358 cri.go:89] found id: ""
	I1216 03:00:16.900573 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.900580 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:16.900586 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:16.900660 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:16.925198 1848358 cri.go:89] found id: ""
	I1216 03:00:16.925213 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.925221 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:16.925226 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:16.925288 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:16.968532 1848358 cri.go:89] found id: ""
	I1216 03:00:16.968545 1848358 logs.go:282] 0 containers: []
	W1216 03:00:16.968552 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:16.968557 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:16.968620 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:17.001327 1848358 cri.go:89] found id: ""
	I1216 03:00:17.001343 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.001351 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:17.001357 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:17.001427 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:17.029828 1848358 cri.go:89] found id: ""
	I1216 03:00:17.029843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.029850 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:17.029855 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:17.029917 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:17.055865 1848358 cri.go:89] found id: ""
	I1216 03:00:17.055880 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.055887 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:17.055892 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:17.055956 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:17.081782 1848358 cri.go:89] found id: ""
	I1216 03:00:17.081796 1848358 logs.go:282] 0 containers: []
	W1216 03:00:17.081804 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:17.081812 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:17.081823 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:17.137664 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:17.137684 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:17.155387 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:17.155413 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:17.223693 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:17.215814   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.216359   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.217875   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.218284   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:17.219788   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:17.223704 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:17.223715 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:17.285895 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:17.285915 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:19.819792 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:19.830531 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:19.830595 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:19.855374 1848358 cri.go:89] found id: ""
	I1216 03:00:19.855388 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.855395 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:19.855400 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:19.855459 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:19.880613 1848358 cri.go:89] found id: ""
	I1216 03:00:19.880627 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.880634 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:19.880639 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:19.880701 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:19.905217 1848358 cri.go:89] found id: ""
	I1216 03:00:19.905231 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.905238 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:19.905243 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:19.905306 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:19.938230 1848358 cri.go:89] found id: ""
	I1216 03:00:19.938245 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.938252 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:19.938257 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:19.938318 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:19.972308 1848358 cri.go:89] found id: ""
	I1216 03:00:19.972322 1848358 logs.go:282] 0 containers: []
	W1216 03:00:19.972330 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:19.972335 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:19.972396 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:20.009826 1848358 cri.go:89] found id: ""
	I1216 03:00:20.009843 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.009851 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:20.009857 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:20.009931 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:20.047016 1848358 cri.go:89] found id: ""
	I1216 03:00:20.047031 1848358 logs.go:282] 0 containers: []
	W1216 03:00:20.047075 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:20.047084 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:20.047095 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:20.105420 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:20.105444 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:20.123806 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:20.123824 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:20.193387 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:20.184716   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.185705   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.187519   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.188104   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:20.189189   16759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:20.193399 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:20.193410 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:20.256212 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:20.256232 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
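
Every "Gathering logs for X ..." line above pairs with exactly one shell pipeline run over SSH, and the commands appear verbatim in the log: journalctl -n 400 for systemd units, dmesg filtered to warning-and-above for the kernel. Collected in one place for reference (commands copied from the log lines; the map itself is only an illustration):

package main

import "fmt"

// gatherCmds maps each log source named above to the shell command the
// runner executes for it (commands copied from the log; sketch only).
var gatherCmds = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"containerd":       "sudo journalctl -u containerd -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range gatherCmds {
		fmt.Printf("%-16s %s\n", name, cmd)
	}
}
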
	I1216 03:00:22.788953 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:22.799143 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:22.799205 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:22.824912 1848358 cri.go:89] found id: ""
	I1216 03:00:22.824926 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.824933 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:22.824938 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:22.824999 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:22.848993 1848358 cri.go:89] found id: ""
	I1216 03:00:22.849007 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.849014 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:22.849019 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:22.849077 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:22.873445 1848358 cri.go:89] found id: ""
	I1216 03:00:22.873467 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.873476 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:22.873481 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:22.873548 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:22.898928 1848358 cri.go:89] found id: ""
	I1216 03:00:22.898952 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.898960 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:22.898965 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:22.899088 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:22.924441 1848358 cri.go:89] found id: ""
	I1216 03:00:22.924455 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.924462 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:22.924471 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:22.924536 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:22.972165 1848358 cri.go:89] found id: ""
	I1216 03:00:22.972187 1848358 logs.go:282] 0 containers: []
	W1216 03:00:22.972194 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:22.972200 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:22.972272 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:23.007998 1848358 cri.go:89] found id: ""
	I1216 03:00:23.008014 1848358 logs.go:282] 0 containers: []
	W1216 03:00:23.008021 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:23.008030 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:23.008041 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:23.074846 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:23.065592   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.066370   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.068447   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.069048   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:23.070772   16856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:23.074856 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:23.074867 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:23.141968 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:23.141990 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:23.170755 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:23.170772 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:23.229156 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:23.229176 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:25.746547 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:25.757092 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:25.757177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:25.781744 1848358 cri.go:89] found id: ""
	I1216 03:00:25.781758 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.781765 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:25.781770 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:25.781829 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:25.810185 1848358 cri.go:89] found id: ""
	I1216 03:00:25.810200 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.810207 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:25.810212 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:25.810273 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:25.837797 1848358 cri.go:89] found id: ""
	I1216 03:00:25.837810 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.837818 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:25.837822 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:25.837881 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:25.864444 1848358 cri.go:89] found id: ""
	I1216 03:00:25.864466 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.864474 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:25.864479 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:25.864537 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:25.889170 1848358 cri.go:89] found id: ""
	I1216 03:00:25.889185 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.889192 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:25.889197 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:25.889253 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:25.913381 1848358 cri.go:89] found id: ""
	I1216 03:00:25.913396 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.913403 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:25.913409 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:25.913468 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:25.956168 1848358 cri.go:89] found id: ""
	I1216 03:00:25.956184 1848358 logs.go:282] 0 containers: []
	W1216 03:00:25.956191 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:00:25.956199 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:25.956209 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:25.987017 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:25.987032 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:26.056762 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:26.056783 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:26.074582 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:26.074599 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:26.142533 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:26.133438   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.134117   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.135700   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.136346   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:26.138045   16983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 03:00:26.142543 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:26.142554 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
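
The describe-nodes failures come from shelling out to the node's own kubectl binary against the node kubeconfig, with stdout and stderr captured separately, which is why the warnings print an empty stdout block and a populated stderr block. A sketch of that invocation with split capture, paths copied from the log:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// with the apiserver down this exits 1, matching the warnings above
		fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", stdout.String(), stderr.String(), err)
	}
}
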
	I1216 03:00:28.704757 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:28.715093 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:00:28.715171 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:00:28.756309 1848358 cri.go:89] found id: ""
	I1216 03:00:28.756339 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.756350 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:00:28.756355 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:00:28.756442 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:00:28.786013 1848358 cri.go:89] found id: ""
	I1216 03:00:28.786027 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.786033 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:00:28.786038 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:00:28.786099 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:00:28.813243 1848358 cri.go:89] found id: ""
	I1216 03:00:28.813257 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.813264 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:00:28.813269 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:00:28.813329 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:00:28.837627 1848358 cri.go:89] found id: ""
	I1216 03:00:28.837642 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.837649 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:00:28.837654 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:00:28.837714 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:00:28.862744 1848358 cri.go:89] found id: ""
	I1216 03:00:28.862768 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.862775 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:00:28.862780 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:00:28.862850 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:00:28.888763 1848358 cri.go:89] found id: ""
	I1216 03:00:28.888777 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.888784 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:00:28.888790 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:00:28.888851 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:00:28.913212 1848358 cri.go:89] found id: ""
	I1216 03:00:28.913226 1848358 logs.go:282] 0 containers: []
	W1216 03:00:28.913234 1848358 logs.go:284] No container was found matching "kindnet"
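The sweep above is minikube enumerating each expected control-plane container by name through the CRI; every query returns an empty ID list, so no Kubernetes container was ever created. The same sweep can be reproduced on the node with the commands the log itself runs:

    # same queries the log runs, one per expected control-plane container
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"   # empty output = container never created
    done

Empty output for every name, as here, points the investigation at the kubelet (which creates these static pods) rather than at containerd.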
	I1216 03:00:28.913242 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:00:28.913252 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:00:28.973937 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:00:28.973957 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:00:28.995906 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:00:28.995924 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:00:29.068971 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:00:29.060478   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.060883   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062455   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062780   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.064407   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:00:29.060478   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.060883   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062455   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.062780   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:00:29.064407   17076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:00:29.068980 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:00:29.068994 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:00:29.132688 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:00:29.132707 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:00:31.666915 1848358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:00:31.677125 1848358 kubeadm.go:602] duration metric: took 4m1.758576282s to restartPrimaryControlPlane
	W1216 03:00:31.677186 1848358 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:00:31.677266 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
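After just over four minutes of failed restart attempts, minikube falls back to a full reset-and-reinit: the `kubeadm reset` above wipes the generated /etc/kubernetes state so the subsequent `kubeadm init` starts from a clean slate. Run by hand it is the same invocation, with kubeadm resolved from minikube's pinned binaries:

    # verbatim from the log; kubeadm comes from minikube's pinned binary directory
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force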
	I1216 03:00:32.091488 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:00:32.105369 1848358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:00:32.113490 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:00:32.113550 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:00:32.122054 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:00:32.122064 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:00:32.122120 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:00:32.130622 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:00:32.130682 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:00:32.138437 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:00:32.146797 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:00:32.146863 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:00:32.155178 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.163734 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:00:32.163795 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:00:32.171993 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:00:32.180028 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:00:32.180097 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
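The grep/rm cycle above is the stale-config cleanup: for each kubeconfig kubeadm writes, minikube checks whether it already points at https://control-plane.minikube.internal:8441 and deletes it otherwise. Here every grep exits with status 2 because the reset already removed the files, so the removals are no-ops. The cycle is equivalent to:

    # equivalent loop over the four kubeconfigs kubeadm writes
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # files already gone here, so this is a no-op
    done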
	I1216 03:00:32.188091 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:00:32.228785 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:00:32.228977 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:00:32.306472 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:00:32.306542 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:00:32.306577 1848358 kubeadm.go:319] OS: Linux
	I1216 03:00:32.306630 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:00:32.306684 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:00:32.306730 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:00:32.306783 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:00:32.306837 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:00:32.306884 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:00:32.306934 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:00:32.306987 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:00:32.307033 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:00:32.370232 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:00:32.370342 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:00:32.370445 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:00:32.376940 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:00:32.380870 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:00:32.380973 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:00:32.381073 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:00:32.381166 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:00:32.381227 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:00:32.381296 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:00:32.381349 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:00:32.381411 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:00:32.381496 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:00:32.381600 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:00:32.381683 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:00:32.381723 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:00:32.381783 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:00:32.587867 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:00:32.728887 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:00:33.127071 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:00:33.632583 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:00:33.851925 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:00:33.852650 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:00:33.855273 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:00:33.858613 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:00:33.858712 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:00:33.858788 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:00:33.858854 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:00:33.878797 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:00:33.879802 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:00:33.887340 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:00:33.887615 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:00:33.887656 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:00:34.023686 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:00:34.027990 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:04:34.028846 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005338087s
	I1216 03:04:34.028875 1848358 kubeadm.go:319] 
	I1216 03:04:34.028931 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:04:34.028963 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:04:34.029067 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:04:34.029071 1848358 kubeadm.go:319] 
	I1216 03:04:34.029175 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:04:34.029206 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:04:34.029236 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:04:34.029239 1848358 kubeadm.go:319] 
	I1216 03:04:34.033654 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:04:34.034083 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:04:34.034191 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:04:34.034426 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:04:34.034431 1848358 kubeadm.go:319] 
	I1216 03:04:34.034499 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 03:04:34.034613 1848358 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005338087s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
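The first init attempt dies exactly where kubeadm says it will: the kubelet never answers its local health endpoint within the 4m0s budget, so every static pod, apiserver included, stays uncreated. The health probe kubeadm polls, and the two follow-ups it recommends, can be run by hand on the node:

    # the exact probe kubeadm polls, plus the two follow-ups it recommends
    curl -sSL http://127.0.0.1:10248/healthz
    systemctl status kubelet
    journalctl -xeu kubelet

Given the cgroups v1 deprecation warning in the stderr above, the journalctl output is the place to look for the kubelet refusing to come up on this host.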
	
	I1216 03:04:34.034714 1848358 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:04:34.442103 1848358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:04:34.455899 1848358 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:04:34.455954 1848358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:04:34.464166 1848358 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:04:34.464176 1848358 kubeadm.go:158] found existing configuration files:
	
	I1216 03:04:34.464227 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 03:04:34.472141 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:04:34.472197 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:04:34.479703 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 03:04:34.487496 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:04:34.487553 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:04:34.495305 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.504218 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:04:34.504277 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:04:34.512085 1848358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 03:04:34.520037 1848358 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:04:34.520091 1848358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:04:34.527590 1848358 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:04:34.569546 1848358 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:04:34.569597 1848358 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:04:34.648580 1848358 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:04:34.648645 1848358 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:04:34.648680 1848358 kubeadm.go:319] OS: Linux
	I1216 03:04:34.648724 1848358 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:04:34.648775 1848358 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:04:34.648847 1848358 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:04:34.648894 1848358 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:04:34.648941 1848358 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:04:34.648988 1848358 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:04:34.649031 1848358 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:04:34.649078 1848358 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:04:34.649123 1848358 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:04:34.718553 1848358 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:04:34.718667 1848358 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:04:34.718765 1848358 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:04:34.725198 1848358 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:04:34.730521 1848358 out.go:252]   - Generating certificates and keys ...
	I1216 03:04:34.730604 1848358 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:04:34.730670 1848358 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:04:34.730745 1848358 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:04:34.730804 1848358 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:04:34.730873 1848358 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:04:34.730926 1848358 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:04:34.730988 1848358 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:04:34.731077 1848358 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:04:34.731151 1848358 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:04:34.731222 1848358 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:04:34.731258 1848358 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:04:34.731313 1848358 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:04:34.775823 1848358 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:04:35.226979 1848358 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:04:35.500835 1848358 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:04:35.803186 1848358 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:04:35.922858 1848358 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:04:35.923646 1848358 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:04:35.926392 1848358 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:04:35.929487 1848358 out.go:252]   - Booting up control plane ...
	I1216 03:04:35.929587 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:04:35.929670 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:04:35.930420 1848358 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:04:35.952397 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:04:35.952501 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:04:35.960726 1848358 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:04:35.961037 1848358 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:04:35.961210 1848358 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:04:36.110987 1848358 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:04:36.111155 1848358 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:08:36.111000 1848358 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000244075s
	I1216 03:08:36.111025 1848358 kubeadm.go:319] 
	I1216 03:08:36.111095 1848358 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:08:36.111126 1848358 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:08:36.111231 1848358 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:08:36.111235 1848358 kubeadm.go:319] 
	I1216 03:08:36.111337 1848358 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:08:36.111368 1848358 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:08:36.111397 1848358 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:08:36.111401 1848358 kubeadm.go:319] 
	I1216 03:08:36.115184 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:08:36.115598 1848358 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:08:36.115704 1848358 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:08:36.115939 1848358 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 03:08:36.115944 1848358 kubeadm.go:319] 
	I1216 03:08:36.116012 1848358 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
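The retry fails identically, down to the same three preflight warnings, so this looks deterministic rather than a startup race. The second warning is the most suggestive one: per KEP-5573, kubelet v1.35 fails by default on cgroups v1 hosts, and this node's kernel (5.15.0-1084-aws, with the CGROUPS_* v1 controllers listed above) appears to be one. Per that warning, keeping cgroups v1 working would mean opting in through the kubelet configuration; an illustrative sketch (the warning names the option 'FailCgroupV1'; in the KubeletConfiguration YAML it is spelled failCgroupV1) might look like:

    # illustrative sketch only: opt kubelet >= 1.35 back in to cgroups v1, per the warning above
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF

and, as the warning notes, the corresponding validation would also have to be explicitly skipped.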
	I1216 03:08:36.116067 1848358 kubeadm.go:403] duration metric: took 12m6.232765178s to StartCluster
	I1216 03:08:36.116112 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:08:36.116177 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:08:36.140414 1848358 cri.go:89] found id: ""
	I1216 03:08:36.140430 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.140437 1848358 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:08:36.140442 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:08:36.140504 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:08:36.164577 1848358 cri.go:89] found id: ""
	I1216 03:08:36.164590 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.164598 1848358 logs.go:284] No container was found matching "etcd"
	I1216 03:08:36.164604 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:08:36.164663 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:08:36.188307 1848358 cri.go:89] found id: ""
	I1216 03:08:36.188321 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.188328 1848358 logs.go:284] No container was found matching "coredns"
	I1216 03:08:36.188333 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:08:36.188394 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:08:36.213037 1848358 cri.go:89] found id: ""
	I1216 03:08:36.213050 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.213057 1848358 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:08:36.213062 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:08:36.213121 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:08:36.239675 1848358 cri.go:89] found id: ""
	I1216 03:08:36.239690 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.239698 1848358 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:08:36.239704 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:08:36.239762 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:08:36.262932 1848358 cri.go:89] found id: ""
	I1216 03:08:36.262947 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.262955 1848358 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:08:36.262960 1848358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:08:36.263018 1848358 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:08:36.288318 1848358 cri.go:89] found id: ""
	I1216 03:08:36.288332 1848358 logs.go:282] 0 containers: []
	W1216 03:08:36.288340 1848358 logs.go:284] No container was found matching "kindnet"
	I1216 03:08:36.288349 1848358 logs.go:123] Gathering logs for containerd ...
	I1216 03:08:36.288358 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:08:36.350247 1848358 logs.go:123] Gathering logs for container status ...
	I1216 03:08:36.350267 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:08:36.380644 1848358 logs.go:123] Gathering logs for kubelet ...
	I1216 03:08:36.380660 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:08:36.436449 1848358 logs.go:123] Gathering logs for dmesg ...
	I1216 03:08:36.436466 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:08:36.457199 1848358 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:08:36.457222 1848358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:08:36.526010 1848358 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 03:08:36.517899   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.518716   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520309   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.520628   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:08:36.522143   20886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1216 03:08:36.526029 1848358 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 03:08:36.526065 1848358 out.go:285] * 
	W1216 03:08:36.526124 1848358 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.526137 1848358 out.go:285] * 
	W1216 03:08:36.528271 1848358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
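For triage of this run, the box's request can be satisfied in one step against the profile this test created:

    # should bundle the same kubelet/containerd/dmesg sections gathered above into one file
    minikube logs --file=logs.txt -p functional-389759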
	I1216 03:08:36.533177 1848358 out.go:203] 
	W1216 03:08:36.537050 1848358 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000244075s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 03:08:36.537112 1848358 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 03:08:36.537136 1848358 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 03:08:36.540537 1848358 out.go:203] 
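	The kubelet journal later in this dump (see "==> kubelet <==") shows the concrete failure behind this exit: on this cgroup v1 host, kubelet v1.35.0-beta.0 refuses to start unless cgroup v1 support is explicitly re-enabled, exactly as the SystemVerification warning above says (set 'FailCgroupV1' to 'false' and skip the validation). A minimal sketch of that opt-in, assuming the kubeadm patches mechanism already in play here (the "[patches] Applied patch ... to target \"kubeletconfiguration\"" line above shows one being applied); the /var/tmp/minikube/patches path below is hypothetical:

	# Sketch: re-enable cgroup v1 for kubelet via a kubeadm patch file.
	# kubeadm --patches reads files named "<target>+<patchtype>.yaml";
	# "kubeletconfiguration" is a supported patch target.
	mkdir -p /var/tmp/minikube/patches
	cat > /var/tmp/minikube/patches/kubeletconfiguration+strategic.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	# kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /var/tmp/minikube/patches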
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418983774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418998239Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419036154Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419097175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419108202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419119509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419128805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419140062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419155980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419187823Z" level=info msg="Connect containerd service"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419497668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.420076931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439480285Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439840672Z" level=info msg="Start recovering state"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439686821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.443248018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513022632Z" level=info msg="Start event monitor"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513204659Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513279259Z" level=info msg="Start streaming server"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513342856Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513405935Z" level=info msg="runtime interface starting up..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513471920Z" level=info msg="starting plugins..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513539119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:56:28 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.516797790Z" level=info msg="containerd successfully booted in 0.120064s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:10:53.031229   22531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:10:53.031954   22531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:10:53.033612   22531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:10:53.033950   22531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:10:53.035707   22531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
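	These connection-refused errors are a downstream symptom rather than a separate failure: with kubelet crash-looping, the kube-apiserver static pod is never started, so nothing listens on port 8441. The kubelet-side health endpoint that kubeadm was polling (quoted verbatim in the wait-control-plane error above) can be probed directly to confirm the chain:

	# The same probe kubeadm ran while waiting for the control plane:
	curl -sSL http://127.0.0.1:10248/healthz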
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:10:53 up  8:53,  0 user,  load average: 0.16, 0.26, 0.49
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:10:49 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:10:50 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 16 03:10:50 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:50 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:50 functional-389759 kubelet[22415]: E1216 03:10:50.741716   22415 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:10:50 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:10:50 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:10:51 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 16 03:10:51 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:51 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:51 functional-389759 kubelet[22420]: E1216 03:10:51.479418   22420 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:10:51 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:10:51 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:10:52 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 501.
	Dec 16 03:10:52 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:52 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:52 functional-389759 kubelet[22439]: E1216 03:10:52.174401   22439 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:10:52 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:10:52 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:10:52 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 502.
	Dec 16 03:10:52 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:52 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:10:52 functional-389759 kubelet[22523]: E1216 03:10:52.993528   22523 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:10:52 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:10:52 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
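	The restart counter above (499, 500, 501, 502, ...) is systemd retrying the same configuration-validation failure; kubelet never stays up long enough to serve its healthz endpoint. A quick host-side check of which cgroup hierarchy is in play, as a sketch using the conventional mount point:

	# "cgroup2fs" means unified cgroup v2; "tmpfs" means legacy/hybrid cgroup v1,
	# which kubelet v1.35+ rejects unless failCgroupV1 is set to false.
	stat -fc %T /sys/fs/cgroup/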
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (381.050854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 5 more times)
E1216 03:08:51.133563 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 3 more times)
I1216 03:08:54.965198 1798370 retry.go:31] will retry after 2.690702594s: Temporary Error: Get "http://10.100.132.70": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 12 more times)
I1216 03:09:07.656866 1798370 retry.go:31] will retry after 4.311134488s: Temporary Error: Get "http://10.100.132.70": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 13 more times)
I1216 03:09:21.969092 1798370 retry.go:31] will retry after 3.531613222s: Temporary Error: Get "http://10.100.132.70": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 13 more times)
I1216 03:09:35.502292 1798370 retry.go:31] will retry after 7.222179802s: Temporary Error: Get "http://10.100.132.70": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 16 more times)
I1216 03:09:52.725790 1798370 retry.go:31] will retry after 16.132033456s: Temporary Error: Get "http://10.100.132.70": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 25 more times)
I1216 03:10:18.858996 1798370 retry.go:31] will retry after 22.455755813s: Temporary Error: Get "http://10.100.132.70": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last message repeated 23 more times)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
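The warnings above (and the two longer runs that follow) come from the test helper repeatedly listing pods by label selector and retrying on failure until a four-minute deadline; every attempt gets connection refused because nothing is answering on 192.168.49.2:8441. A minimal sketch of that kind of poll, assuming client-go; the function and variable names are illustrative, not minikube's actual helper:

    // poll_sketch.go: roughly the shape of the loop behind helpers_test.go:338,
    // assuming client-go; names here are illustrative, not minikube's code.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				// Matches the log: warn and keep retrying instead of failing hard.
    				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
    				return false, nil
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == "Running" {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForPods(cs, "kube-system", "integration-test=storage-provisioner", 4*time.Minute); err != nil {
    		fmt.Println("failed waiting for storage-provisioner:", err) // context deadline exceeded
    	}
    }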
E1216 03:10:51.130872 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[warning above repeated 62 more times]
E1216 03:11:54.214326 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[warning above repeated 50 more times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
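This last warning is different: before the final attempt, client-go's client-side rate limiter computed that waiting for a token would overrun the poll's remaining context deadline, so it gave up without sending a request. The error text comes from golang.org/x/time/rate (the package client-go's default token-bucket limiter is built on) and can be reproduced in isolation; a standalone sketch, not minikube code:

    // rate_sketch.go: reproduces "rate: Wait(n=1) would exceed context deadline"
    // with golang.org/x/time/rate in isolation.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// One token every 10s, burst of 1; the limiter starts full, so the
    	// first Wait consumes the token immediately.
    	lim := rate.NewLimiter(rate.Every(10*time.Second), 1)
    	_ = lim.Wait(context.Background())

    	// The next token is ~10s away, but only 1s remains on the context,
    	// so Wait fails up front without sleeping.
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()
    	if err := lim.Wait(ctx); err != nil {
    		fmt.Println(err) // rate: Wait(n=1) would exceed context deadline
    	}
    }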
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (322.349193ms)
-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
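The --format={{.APIServer}} flag above is a Go text/template rendered against minikube's status struct, which is why the command printed just "Stopped". A self-contained sketch of that mechanism; the Status type here only mirrors the fields minikube reports and is illustrative, not minikube's actual type:

    // template_sketch.go: how a --format Go template such as {{.APIServer}}
    // gets rendered; Status is an illustrative stand-in for minikube's type.
    package main

    import (
    	"os"
    	"text/template"
    )

    type Status struct {
    	Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped", Kubeconfig: "Configured"}
    	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
    }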
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:
-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
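Note on the inspect dump above: HostConfig.PortBindings asks for "HostIp": "127.0.0.1" with an empty "HostPort", which tells Docker to pick ephemeral host ports at container start; the resolved mappings appear under NetworkSettings.Ports (here the apiserver port 8441/tcp landed on 127.0.0.1:34357). While the container is running, the live mapping can be read back with something like:

	docker port functional-389759 8441/tcp
	# prints e.g. 127.0.0.1:34357 (host ports vary per run)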
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (301.351937ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-389759 image load --daemon kicbase/echo-server:functional-389759 --alsologtostderr                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls                                                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image save kicbase/echo-server:functional-389759 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image rm kicbase/echo-server:functional-389759 --alsologtostderr                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls                                                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls                                                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image save --daemon kicbase/echo-server:functional-389759 --alsologtostderr                                                                   │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh sudo cat /etc/test/nested/copy/1798370/hosts                                                                                              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh sudo cat /etc/ssl/certs/1798370.pem                                                                                                       │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh sudo cat /usr/share/ca-certificates/1798370.pem                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh sudo cat /etc/ssl/certs/17983702.pem                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh sudo cat /usr/share/ca-certificates/17983702.pem                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls --format short --alsologtostderr                                                                                                     │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ update-context │ functional-389759 update-context --alsologtostderr -v=2                                                                                                         │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh            │ functional-389759 ssh pgrep buildkitd                                                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ image          │ functional-389759 image build -t localhost/my-image:functional-389759 testdata/build --alsologtostderr                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls                                                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls --format yaml --alsologtostderr                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls --format json --alsologtostderr                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ image          │ functional-389759 image ls --format table --alsologtostderr                                                                                                     │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ update-context │ functional-389759 update-context --alsologtostderr -v=2                                                                                                         │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ update-context │ functional-389759 update-context --alsologtostderr -v=2                                                                                                         │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:11:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:11:08.795955 1865782 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:11:08.796085 1865782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.796091 1865782 out.go:374] Setting ErrFile to fd 2...
	I1216 03:11:08.796094 1865782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.796367 1865782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:11:08.796716 1865782 out.go:368] Setting JSON to false
	I1216 03:11:08.797563 1865782 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":32013,"bootTime":1765822656,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:11:08.797628 1865782 start.go:143] virtualization:  
	I1216 03:11:08.800934 1865782 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:11:08.804685 1865782 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:11:08.804777 1865782 notify.go:221] Checking for updates...
	I1216 03:11:08.810552 1865782 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:11:08.813422 1865782 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:11:08.816394 1865782 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:11:08.819266 1865782 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:11:08.822153 1865782 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:11:08.825481 1865782 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:11:08.826103 1865782 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:11:08.847078 1865782 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:11:08.847214 1865782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.902895 1865782 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.893712123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.903009 1865782 docker.go:319] overlay module found
	I1216 03:11:08.906147 1865782 out.go:179] * Using the docker driver based on existing profile
	I1216 03:11:08.908914 1865782 start.go:309] selected driver: docker
	I1216 03:11:08.908934 1865782 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.909031 1865782 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:11:08.909157 1865782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.974996 1865782 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.964969121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.975521 1865782 cni.go:84] Creating CNI manager for ""
	I1216 03:11:08.975583 1865782 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 03:11:08.975624 1865782 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.980501 1865782 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 03:11:15 functional-389759 containerd[9700]: time="2025-12-16T03:11:15.189493419Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:11:15 functional-389759 containerd[9700]: time="2025-12-16T03:11:15.190099371Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-389759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.231004295Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-389759\""
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.233775394Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-389759\""
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.237002835Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.246870163Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-389759\" returns successfully"
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.500083957Z" level=info msg="No images store for sha256:b787696a6c58441bc537b3397714c7b5a2b86c7400b118b1e5fd808d9f4c23f9"
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.502266013Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-389759\""
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.509502852Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:11:16 functional-389759 containerd[9700]: time="2025-12-16T03:11:16.509824371Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-389759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.310563624Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-389759\""
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.312999542Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-389759\""
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.315063480Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.324649625Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-389759\" returns successfully"
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.975430148Z" level=info msg="No images store for sha256:6b22cd3b4da416f88809256ddbbcf7f61119bf0a025376010076e19733240e56"
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.977642170Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-389759\""
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.985604686Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:11:17 functional-389759 containerd[9700]: time="2025-12-16T03:11:17.986079162Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-389759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:11:24 functional-389759 containerd[9700]: time="2025-12-16T03:11:24.086466178Z" level=info msg="connecting to shim re96p6nk0z84l3s7572w262e4" address="unix:///run/containerd/s/d33112d67a9baf38600b3fd365db916a0df1ba7ff5296fe2b7dd61a22676b54e" namespace=k8s.io protocol=ttrpc version=3
	Dec 16 03:11:24 functional-389759 containerd[9700]: time="2025-12-16T03:11:24.165874765Z" level=info msg="shim disconnected" id=re96p6nk0z84l3s7572w262e4 namespace=k8s.io
	Dec 16 03:11:24 functional-389759 containerd[9700]: time="2025-12-16T03:11:24.166690789Z" level=info msg="cleaning up after shim disconnected" id=re96p6nk0z84l3s7572w262e4 namespace=k8s.io
	Dec 16 03:11:24 functional-389759 containerd[9700]: time="2025-12-16T03:11:24.166734152Z" level=info msg="cleaning up dead shim" id=re96p6nk0z84l3s7572w262e4 namespace=k8s.io
	Dec 16 03:11:24 functional-389759 containerd[9700]: time="2025-12-16T03:11:24.449488768Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-389759\""
	Dec 16 03:11:24 functional-389759 containerd[9700]: time="2025-12-16T03:11:24.456924116Z" level=info msg="ImageCreate event name:\"sha256:f6b90553d47705cc6fc14f052bdd590c7f5e52677cc9aef378193ce6df39e196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:11:24 functional-389759 containerd[9700]: time="2025-12-16T03:11:24.457564069Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-389759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:12:46.698507   25084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:12:46.699239   25084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:12:46.701006   25084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:12:46.701593   25084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:12:46.703111   25084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:12:46 up  8:55,  0 user,  load average: 0.36, 0.41, 0.52
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:12:43 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:12:43 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 16 03:12:43 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:43 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:43 functional-389759 kubelet[24956]: E1216 03:12:43.984958   24956 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:12:43 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:12:43 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:12:44 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 16 03:12:44 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:44 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:44 functional-389759 kubelet[24961]: E1216 03:12:44.733186   24961 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:12:44 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:12:44 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:12:45 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 16 03:12:45 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:45 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:45 functional-389759 kubelet[24966]: E1216 03:12:45.503477   24966 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:12:45 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:12:45 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:12:46 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 16 03:12:46 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:46 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:12:46 functional-389759 kubelet[25002]: E1216 03:12:46.255634   25002 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:12:46 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:12:46 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
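The kubelet section above points at the proximate cause for this group of failures: every kubelet start exits with "kubelet is configured to not run on a host using cgroup v1" and systemd is already at restart counter 653, so the apiserver on port 8441 never comes up and the "describe nodes" and status probes all see connection refused. A quick way to confirm which cgroup version the kicbase container actually sees, as a sketch run against this profile's container:

	docker exec functional-389759 stat -fc %T /sys/fs/cgroup/
	# cgroup2fs means cgroup v2; tmpfs means the legacy cgroup v1 hierarchy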
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (309.7098ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.66s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-389759 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-389759 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (59.724287ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-389759 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
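All five label assertions above fail the same way: the apiserver at 192.168.49.2:8441 refuses connections, kubectl returns an empty List, and (index .items 0) on the empty .items slice is what trips the template, not a missing minikube.k8s.io label. As a sketch (not what functional_test.go actually runs), a range-based template degrades to empty output instead of erroring when no nodes come back:

	kubectl --context functional-389759 get nodes --output=go-template --template='{{range .items}}{{range $k, $v := .metadata.labels}}{{$k}} {{end}}{{end}}'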
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:

-- stdout --
	[
	    {
	        "Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	        "Created": "2025-12-16T02:41:46.85492681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1837192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T02:41:46.915844066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
	        "HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
	        "LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
	        "Name": "/functional-389759",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389759:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389759",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
	                "LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389759",
	                "Source": "/var/lib/docker/volumes/functional-389759/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389759",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389759",
	                "name.minikube.sigs.k8s.io": "functional-389759",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
	            "SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389759": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:6a:fd:73:00:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
	                    "EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389759",
	                        "23b85b27a161"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
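
Any of the port mappings in the inspect output above can be read back on their own with a Go-template filter; the harness itself uses this form later in this log to resolve the SSH port. A minimal sketch against this profile:

    # prints 34354 in this run: the host port mapped to the container's 22/tcp
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-389759
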
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 2 (299.125569ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-389759 service hello-node --url                                                                                                          │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001:/mount-9p --alsologtostderr -v=1              │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:10 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh -- ls -la /mount-9p                                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh cat /mount-9p/test-1765854658954368425                                                                                        │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2071099183/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh -- ls -la /mount-9p                                                                                                           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount1 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount1                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount2 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ mount     │ -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount3 --alsologtostderr -v=1                │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ ssh       │ functional-389759 ssh findmnt -T /mount1                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh findmnt -T /mount2                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh       │ functional-389759 ssh findmnt -T /mount3                                                                                                            │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ mount     │ -p functional-389759 --kill=true                                                                                                                    │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ start     │ -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ start     │ -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ start     │ -p functional-389759 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-389759 --alsologtostderr -v=1                                                                                      │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:11:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:11:08.795955 1865782 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:11:08.796085 1865782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.796091 1865782 out.go:374] Setting ErrFile to fd 2...
	I1216 03:11:08.796094 1865782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.796367 1865782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:11:08.796716 1865782 out.go:368] Setting JSON to false
	I1216 03:11:08.797563 1865782 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":32013,"bootTime":1765822656,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:11:08.797628 1865782 start.go:143] virtualization:  
	I1216 03:11:08.800934 1865782 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:11:08.804685 1865782 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:11:08.804777 1865782 notify.go:221] Checking for updates...
	I1216 03:11:08.810552 1865782 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:11:08.813422 1865782 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:11:08.816394 1865782 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:11:08.819266 1865782 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:11:08.822153 1865782 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:11:08.825481 1865782 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:11:08.826103 1865782 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:11:08.847078 1865782 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:11:08.847214 1865782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.902895 1865782 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.893712123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.903009 1865782 docker.go:319] overlay module found
	I1216 03:11:08.906147 1865782 out.go:179] * Using the docker driver based on existing profile
	I1216 03:11:08.908914 1865782 start.go:309] selected driver: docker
	I1216 03:11:08.908934 1865782 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.909031 1865782 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:11:08.909157 1865782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.974996 1865782 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.964969121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.975521 1865782 cni.go:84] Creating CNI manager for ""
	I1216 03:11:08.975583 1865782 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 03:11:08.975624 1865782 start.go:353] cluster config:
	{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.980501 1865782 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418983774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.418998239Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419036154Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419097175Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419108202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419119509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419128805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419140062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419155980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419187823Z" level=info msg="Connect containerd service"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.419497668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.420076931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439480285Z" level=info msg="Start subscribing containerd event"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439840672Z" level=info msg="Start recovering state"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.439686821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.443248018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513022632Z" level=info msg="Start event monitor"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513204659Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513279259Z" level=info msg="Start streaming server"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513342856Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513405935Z" level=info msg="runtime interface starting up..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513471920Z" level=info msg="starting plugins..."
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.513539119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 02:56:28 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 02:56:28 functional-389759 containerd[9700]: time="2025-12-16T02:56:28.516797790Z" level=info msg="containerd successfully booted in 0.120064s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 03:11:11.675328   23526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:11.675797   23526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:11.677426   23526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:11.678092   23526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 03:11:11.679111   23526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:11:11 up  8:53,  0 user,  load average: 1.25, 0.50, 0.56
	Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:11:08 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:09 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 524.
	Dec 16 03:11:09 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:09 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:09 functional-389759 kubelet[23296]: E1216 03:11:09.519729   23296 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:09 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:09 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 525.
	Dec 16 03:11:10 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:10 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:10 functional-389759 kubelet[23391]: E1216 03:11:10.244067   23391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 526.
	Dec 16 03:11:10 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:10 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:10 functional-389759 kubelet[23430]: E1216 03:11:10.989535   23430 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:10 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:11:11 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 527.
	Dec 16 03:11:11 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:11 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:11:11 functional-389759 kubelet[23530]: E1216 03:11:11.747784   23530 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:11:11 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:11:11 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 2 (315.693132ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.38s)
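
The kubelet journal above shows the likely root cause for this run: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, so it crash-loops and the apiserver never comes back. A minimal check of which cgroup hierarchy a host actually mounts (a sketch, assuming a Linux host):

    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy,
    # consistent with the CgroupDriver:cgroupfs Ubuntu 20.04 host in this log
    stat -fc %T /sys/fs/cgroup/
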

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1216 03:08:44.375101 1861311 out.go:360] Setting OutFile to fd 1 ...
I1216 03:08:44.375300 1861311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:08:44.375330 1861311 out.go:374] Setting ErrFile to fd 2...
I1216 03:08:44.375352 1861311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:08:44.375632 1861311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:08:44.375966 1861311 mustload.go:66] Loading cluster: functional-389759
I1216 03:08:44.376455 1861311 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:08:44.377008 1861311 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:08:44.408717 1861311 host.go:66] Checking if "functional-389759" exists ...
I1216 03:08:44.409035 1861311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 03:08:44.546694 1861311 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:08:44.535535257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 03:08:44.546906 1861311 api_server.go:166] Checking apiserver status ...
I1216 03:08:44.546988 1861311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 03:08:44.547036 1861311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:08:44.590260 1861311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
W1216 03:08:44.710162 1861311 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1216 03:08:44.713674 1861311 out.go:179] * The control-plane node functional-389759 apiserver is not running: (state=Stopped)
I1216 03:08:44.716782 1861311 out.go:179]   To start a cluster, run: "minikube start -p functional-389759"

                                                
                                                
stdout: * The control-plane node functional-389759 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-389759"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1861310: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
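
The tunnel exits on the apiserver guard visible in its stderr above. One way to reproduce that guard by hand, reusing the pgrep pattern from the log (a sketch, not part of the harness):

    # non-zero exit while kube-apiserver is down, which is what makes the
    # tunnel bail out with "apiserver is not running"
    minikube -p functional-389759 ssh -- "sudo pgrep -xnf kube-apiserver.*minikube.*"
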

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-389759 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-389759 apply -f testdata/testsvc.yaml: exit status 1 (118.680215ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-389759 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)
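
The stderr above suggests --validate=false, but validation is not the real problem: the TCP connection itself is refused. Probing the apiserver endpoint directly (address and port taken from this profile) shows the same refusal without kubectl in between:

    # fails with "connection refused" while the apiserver is down, matching
    # the error kubectl hits when it tries to download the OpenAPI schema
    curl -k https://192.168.49.2:8441/healthz
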

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (126.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.100.132.70": Temporary Error: Get "http://10.100.132.70": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-389759 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-389759 get svc nginx-svc: exit status 1 (64.782343ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-389759 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (126.42s)
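
AccessDirect depends on the host route to the service CIDR (10.96.0.0/12 in this profile's config) that minikube tunnel normally installs; 10.100.132.70 falls inside that range. A quick check on the host (a sketch, assuming iproute2) comes back empty once the tunnel is dead:

    # a live tunnel adds a route for the service CIDR via the node IP 192.168.49.2;
    # no output means packets to 10.100.132.70 have no route and time out
    ip route show | grep 10.96
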

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-389759 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-389759 create deployment hello-node --image kicbase/echo-server: exit status 1 (60.43189ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-389759 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 service list: exit status 103 (253.165663ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389759 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389759"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-389759 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-389759 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389759\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 service list -o json: exit status 103 (264.045409ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389759 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389759"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-389759 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 service --namespace=default --https --url hello-node: exit status 103 (254.461455ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389759 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389759"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-389759 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 service hello-node --url --format={{.IP}}: exit status 103 (281.618357ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389759 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389759"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-389759 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-389759 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389759\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 service hello-node --url: exit status 103 (291.710541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389759 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389759"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-389759 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-389759 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-389759"
functional_test.go:1579: failed to parse "* The control-plane node functional-389759 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389759\"": parse "* The control-plane node functional-389759 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389759\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.29s)
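
Every ServiceCmd subtest above fails on the same guard with exit code 103. The cluster state they observed matches the status probes the harness runs elsewhere in this report; a combined form of the same check (a sketch):

    # prints "Running Stopped" in this run: the container is up, the apiserver is not
    out/minikube-linux-arm64 status --format='{{.Host}} {{.APIServer}}' -p functional-389759
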

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765854658954368425" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765854658954368425" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765854658954368425" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001/test-1765854658954368425
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.439523ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 03:10:59.319114 1798370 retry.go:31] will retry after 470.347804ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 03:10 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 03:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 03:10 test-1765854658954368425
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh cat /mount-9p/test-1765854658954368425
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-389759 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-389759 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (57.705476ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-389759 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (269.48989ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=46809)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 16 03:10 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 16 03:10 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 16 03:10 test-1765854658954368425
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-389759 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:46809
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001:/mount-9p --alsologtostderr -v=1] stderr:
I1216 03:10:59.047110 1863834 out.go:360] Setting OutFile to fd 1 ...
I1216 03:10:59.047869 1863834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:10:59.047881 1863834 out.go:374] Setting ErrFile to fd 2...
I1216 03:10:59.047887 1863834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:10:59.048259 1863834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:10:59.048574 1863834 mustload.go:66] Loading cluster: functional-389759
I1216 03:10:59.049184 1863834 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:10:59.049908 1863834 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:10:59.071007 1863834 host.go:66] Checking if "functional-389759" exists ...
I1216 03:10:59.071329 1863834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 03:10:59.163441 1863834 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:10:59.150410054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 03:10:59.163587 1863834 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 03:10:59.188628 1863834 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001 into VM as /mount-9p ...
I1216 03:10:59.191688 1863834 out.go:179]   - Mount type:   9p
I1216 03:10:59.194476 1863834 out.go:179]   - User ID:      docker
I1216 03:10:59.197356 1863834 out.go:179]   - Group ID:     docker
I1216 03:10:59.200183 1863834 out.go:179]   - Version:      9p2000.L
I1216 03:10:59.203271 1863834 out.go:179]   - Message Size: 262144
I1216 03:10:59.206389 1863834 out.go:179]   - Options:      map[]
I1216 03:10:59.209275 1863834 out.go:179]   - Bind Address: 192.168.49.1:46809
I1216 03:10:59.212602 1863834 out.go:179] * Userspace file server: 
I1216 03:10:59.212882 1863834 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1216 03:10:59.213013 1863834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:10:59.237804 1863834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 03:10:59.333963 1863834 mount.go:180] unmount for /mount-9p ran successfully
I1216 03:10:59.333994 1863834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1216 03:10:59.342907 1863834 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46809,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1216 03:10:59.354877 1863834 main.go:127] stdlog: ufs.go:141 connected
I1216 03:10:59.355138 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tversion tag 65535 msize 262144 version '9P2000.L'
I1216 03:10:59.355197 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rversion tag 65535 msize 262144 version '9P2000'
I1216 03:10:59.355550 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1216 03:10:59.355620 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rattach tag 0 aqid (c9d8ae 2523c185 'd')
I1216 03:10:59.356350 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 0
I1216 03:10:59.356408 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d8ae 2523c185 'd') m d775 at 0 mt 1765854658 l 4096 t 0 d 0 ext )
I1216 03:10:59.361914 1863834 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/.mount-process: {Name:mk0fbf818caf62d5f5d4da765d2cc88b83e4b564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 03:10:59.362182 1863834 mount.go:105] mount successful: ""
I1216 03:10:59.365767 1863834 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2828107160/001 to /mount-9p
I1216 03:10:59.368614 1863834 out.go:203] 
I1216 03:10:59.371462 1863834 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1216 03:11:00.444063 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 0
I1216 03:11:00.444152 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d8ae 2523c185 'd') m d775 at 0 mt 1765854658 l 4096 t 0 d 0 ext )
I1216 03:11:00.444586 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 1 
I1216 03:11:00.444640 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 
I1216 03:11:00.444796 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Topen tag 0 fid 1 mode 0
I1216 03:11:00.444882 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Ropen tag 0 qid (c9d8ae 2523c185 'd') iounit 0
I1216 03:11:00.445064 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 0
I1216 03:11:00.445112 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d8ae 2523c185 'd') m d775 at 0 mt 1765854658 l 4096 t 0 d 0 ext )
I1216 03:11:00.445328 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 0 count 262120
I1216 03:11:00.445465 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 258
I1216 03:11:00.445620 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 258 count 261862
I1216 03:11:00.445661 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:00.445816 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 258 count 262120
I1216 03:11:00.445846 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:00.445999 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1216 03:11:00.446046 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 (c9d8af 2523c185 '') 
I1216 03:11:00.446194 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:00.446231 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d8af 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.446375 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:00.446416 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d8af 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.446567 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 2
I1216 03:11:00.446593 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:00.446737 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 2 0:'test-1765854658954368425' 
I1216 03:11:00.446775 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 (c9d8b1 2523c185 '') 
I1216 03:11:00.447802 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:00.447861 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('test-1765854658954368425' 'jenkins' 'jenkins' '' q (c9d8b1 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.448011 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:00.448077 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('test-1765854658954368425' 'jenkins' 'jenkins' '' q (c9d8b1 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.448343 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 2
I1216 03:11:00.448392 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:00.448612 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1216 03:11:00.448662 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 (c9d8b0 2523c185 '') 
I1216 03:11:00.448795 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:00.448829 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d8b0 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.448961 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:00.448995 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d8b0 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.449131 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 2
I1216 03:11:00.449153 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:00.449282 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 258 count 262120
I1216 03:11:00.449310 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:00.449476 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 1
I1216 03:11:00.449505 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:00.731319 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 1 0:'test-1765854658954368425' 
I1216 03:11:00.731405 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 (c9d8b1 2523c185 '') 
I1216 03:11:00.731593 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 1
I1216 03:11:00.731643 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('test-1765854658954368425' 'jenkins' 'jenkins' '' q (c9d8b1 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.731809 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 1 newfid 2 
I1216 03:11:00.731833 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 
I1216 03:11:00.731968 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Topen tag 0 fid 2 mode 0
I1216 03:11:00.732014 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Ropen tag 0 qid (c9d8b1 2523c185 '') iounit 0
I1216 03:11:00.732133 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 1
I1216 03:11:00.732169 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('test-1765854658954368425' 'jenkins' 'jenkins' '' q (c9d8b1 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:00.732387 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 2 offset 0 count 262120
I1216 03:11:00.732468 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 24
I1216 03:11:00.732584 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 2 offset 24 count 262120
I1216 03:11:00.732619 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:00.732771 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 2 offset 24 count 262120
I1216 03:11:00.732819 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:00.733053 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 2
I1216 03:11:00.733096 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:00.733270 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 1
I1216 03:11:00.733297 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:01.062937 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 0
I1216 03:11:01.063015 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d8ae 2523c185 'd') m d775 at 0 mt 1765854658 l 4096 t 0 d 0 ext )
I1216 03:11:01.063485 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 1 
I1216 03:11:01.063530 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 
I1216 03:11:01.063671 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Topen tag 0 fid 1 mode 0
I1216 03:11:01.063724 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Ropen tag 0 qid (c9d8ae 2523c185 'd') iounit 0
I1216 03:11:01.063869 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 0
I1216 03:11:01.063905 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d8ae 2523c185 'd') m d775 at 0 mt 1765854658 l 4096 t 0 d 0 ext )
I1216 03:11:01.064057 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 0 count 262120
I1216 03:11:01.064166 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 258
I1216 03:11:01.064311 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 258 count 261862
I1216 03:11:01.064342 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:01.064470 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 258 count 262120
I1216 03:11:01.064497 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:01.064646 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1216 03:11:01.064681 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 (c9d8af 2523c185 '') 
I1216 03:11:01.064809 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:01.064844 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d8af 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:01.064983 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:01.065019 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d8af 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:01.065145 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 2
I1216 03:11:01.065173 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:01.065317 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 2 0:'test-1765854658954368425' 
I1216 03:11:01.065347 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 (c9d8b1 2523c185 '') 
I1216 03:11:01.065485 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:01.065518 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('test-1765854658954368425' 'jenkins' 'jenkins' '' q (c9d8b1 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:01.065645 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:01.065683 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('test-1765854658954368425' 'jenkins' 'jenkins' '' q (c9d8b1 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:01.065803 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 2
I1216 03:11:01.065824 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:01.065968 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1216 03:11:01.066000 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rwalk tag 0 (c9d8b0 2523c185 '') 
I1216 03:11:01.066119 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:01.066155 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d8b0 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:01.066296 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tstat tag 0 fid 2
I1216 03:11:01.066330 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d8b0 2523c185 '') m 644 at 0 mt 1765854658 l 24 t 0 d 0 ext )
I1216 03:11:01.066454 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 2
I1216 03:11:01.066473 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:01.066611 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tread tag 0 fid 1 offset 258 count 262120
I1216 03:11:01.066638 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rread tag 0 count 0
I1216 03:11:01.066785 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 1
I1216 03:11:01.066817 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:01.068177 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1216 03:11:01.068265 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rerror tag 0 ename 'file not found' ecode 0
I1216 03:11:01.378120 1863834 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60152 Tclunk tag 0 fid 0
I1216 03:11:01.378180 1863834 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60152 Rclunk tag 0
I1216 03:11:01.379219 1863834 main.go:127] stdlog: ufs.go:147 disconnected
I1216 03:11:01.401712 1863834 out.go:179] * Unmounting /mount-9p ...
I1216 03:11:01.404789 1863834 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1216 03:11:01.412303 1863834 mount.go:180] unmount for /mount-9p ran successfully
I1216 03:11:01.412429 1863834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/.mount-process: {Name:mk0fbf818caf62d5f5d4da765d2cc88b83e4b564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 03:11:01.415636 1863834 out.go:203] 
W1216 03:11:01.418656 1863834 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1216 03:11:01.421537 1863834 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.55s)
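Analysis note: the 9p mount itself behaved (findmnt matched on retry, and all three test files were listed in /mount-9p); the test died at "kubectl replace" because the apiserver at 192.168.49.2:8441 was refusing connections, so the busybox pod never ran and /mount-9p/pod-dates was never written. A hedged sketch (hypothetical helper, not from the suite) for telling "control plane down" apart from a genuine mount regression:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the same endpoint kubectl failed to reach. "connection refused"
	// here points at a stopped control plane, not at the 9p mount.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}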

TestKubernetesUpgrade (800.67s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-271074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1216 03:40:51.130833 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-271074 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.12851038s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-271074
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-271074: (1.3524949s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-271074 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-271074 status --format={{.Host}}: exit status 7 (93.514918ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-271074 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-271074 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m34.018043076s)

-- stdout --
	* [kubernetes-upgrade-271074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-271074" primary control-plane node in "kubernetes-upgrade-271074" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1216 03:40:55.202620 1995776 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:40:55.202756 1995776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:40:55.202766 1995776 out.go:374] Setting ErrFile to fd 2...
	I1216 03:40:55.202772 1995776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:40:55.203024 1995776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:40:55.203469 1995776 out.go:368] Setting JSON to false
	I1216 03:40:55.204491 1995776 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":33800,"bootTime":1765822656,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:40:55.204567 1995776 start.go:143] virtualization:  
	I1216 03:40:55.207836 1995776 out.go:179] * [kubernetes-upgrade-271074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:40:55.211736 1995776 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:40:55.211854 1995776 notify.go:221] Checking for updates...
	I1216 03:40:55.217660 1995776 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:40:55.220697 1995776 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:40:55.223784 1995776 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:40:55.226765 1995776 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:40:55.229636 1995776 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:40:55.232929 1995776 config.go:182] Loaded profile config "kubernetes-upgrade-271074": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1216 03:40:55.233532 1995776 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:40:55.264407 1995776 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:40:55.264545 1995776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:40:55.324864 1995776 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:40:55.314470473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:40:55.324977 1995776 docker.go:319] overlay module found
	I1216 03:40:55.328134 1995776 out.go:179] * Using the docker driver based on existing profile
	I1216 03:40:55.330965 1995776 start.go:309] selected driver: docker
	I1216 03:40:55.330988 1995776 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-271074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-271074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:40:55.331212 1995776 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:40:55.331942 1995776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:40:55.387628 1995776 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:40:55.378033242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:40:55.387968 1995776 cni.go:84] Creating CNI manager for ""
	I1216 03:40:55.388042 1995776 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 03:40:55.388083 1995776 start.go:353] cluster config:
	{Name:kubernetes-upgrade-271074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-271074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:40:55.391267 1995776 out.go:179] * Starting "kubernetes-upgrade-271074" primary control-plane node in "kubernetes-upgrade-271074" cluster
	I1216 03:40:55.394170 1995776 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 03:40:55.397286 1995776 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:40:55.400314 1995776 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 03:40:55.400371 1995776 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 03:40:55.400387 1995776 cache.go:65] Caching tarball of preloaded images
	I1216 03:40:55.400446 1995776 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:40:55.400473 1995776 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 03:40:55.400484 1995776 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 03:40:55.400592 1995776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/config.json ...
	I1216 03:40:55.419805 1995776 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:40:55.419827 1995776 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:40:55.419842 1995776 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:40:55.419872 1995776 start.go:360] acquireMachinesLock for kubernetes-upgrade-271074: {Name:mk5fb1ec8bb84359dfa80e5ff7b56fdc52055db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:40:55.419934 1995776 start.go:364] duration metric: took 36.627µs to acquireMachinesLock for "kubernetes-upgrade-271074"
	I1216 03:40:55.419960 1995776 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:40:55.419966 1995776 fix.go:54] fixHost starting: 
	I1216 03:40:55.420238 1995776 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-271074 --format={{.State.Status}}
	I1216 03:40:55.436884 1995776 fix.go:112] recreateIfNeeded on kubernetes-upgrade-271074: state=Stopped err=<nil>
	W1216 03:40:55.436917 1995776 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:40:55.440102 1995776 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-271074" ...
	I1216 03:40:55.440184 1995776 cli_runner.go:164] Run: docker start kubernetes-upgrade-271074
	I1216 03:40:55.690776 1995776 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-271074 --format={{.State.Status}}
	I1216 03:40:55.711494 1995776 kic.go:430] container "kubernetes-upgrade-271074" state is running.
	I1216 03:40:55.711912 1995776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-271074
	I1216 03:40:55.735461 1995776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/config.json ...
	I1216 03:40:55.735695 1995776 machine.go:94] provisionDockerMachine start ...
	I1216 03:40:55.735755 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:55.758527 1995776 main.go:143] libmachine: Using SSH client type: native
	I1216 03:40:55.758859 1995776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1216 03:40:55.758868 1995776 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:40:55.759581 1995776 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 03:40:58.894511 1995776 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-271074
	
	I1216 03:40:58.894537 1995776 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-271074"
	I1216 03:40:58.894650 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:58.911671 1995776 main.go:143] libmachine: Using SSH client type: native
	I1216 03:40:58.911987 1995776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1216 03:40:58.912010 1995776 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-271074 && echo "kubernetes-upgrade-271074" | sudo tee /etc/hostname
	I1216 03:40:59.057402 1995776 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-271074
	
	I1216 03:40:59.057486 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:59.075895 1995776 main.go:143] libmachine: Using SSH client type: native
	I1216 03:40:59.076240 1995776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1216 03:40:59.076262 1995776 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-271074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-271074/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-271074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:40:59.211213 1995776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:40:59.211309 1995776 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 03:40:59.211353 1995776 ubuntu.go:190] setting up certificates
	I1216 03:40:59.211385 1995776 provision.go:84] configureAuth start
	I1216 03:40:59.211478 1995776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-271074
	I1216 03:40:59.229380 1995776 provision.go:143] copyHostCerts
	I1216 03:40:59.229460 1995776 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 03:40:59.229474 1995776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 03:40:59.229562 1995776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 03:40:59.229668 1995776 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 03:40:59.229679 1995776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 03:40:59.229707 1995776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 03:40:59.229773 1995776 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 03:40:59.229782 1995776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 03:40:59.229808 1995776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 03:40:59.229862 1995776 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-271074 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-271074 localhost minikube]
	I1216 03:40:59.474464 1995776 provision.go:177] copyRemoteCerts
	I1216 03:40:59.474535 1995776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:40:59.474575 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:59.493864 1995776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/kubernetes-upgrade-271074/id_rsa Username:docker}
	I1216 03:40:59.590825 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 03:40:59.610309 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 03:40:59.628951 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:40:59.647568 1995776 provision.go:87] duration metric: took 436.164237ms to configureAuth
	I1216 03:40:59.647595 1995776 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:40:59.647789 1995776 config.go:182] Loaded profile config "kubernetes-upgrade-271074": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:40:59.647805 1995776 machine.go:97] duration metric: took 3.91210059s to provisionDockerMachine
	I1216 03:40:59.647814 1995776 start.go:293] postStartSetup for "kubernetes-upgrade-271074" (driver="docker")
	I1216 03:40:59.647827 1995776 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:40:59.647889 1995776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:40:59.647935 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:59.665319 1995776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/kubernetes-upgrade-271074/id_rsa Username:docker}
	I1216 03:40:59.763205 1995776 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:40:59.766889 1995776 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:40:59.766928 1995776 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:40:59.766957 1995776 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 03:40:59.767036 1995776 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 03:40:59.767169 1995776 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 03:40:59.767282 1995776 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:40:59.774993 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 03:40:59.793696 1995776 start.go:296] duration metric: took 145.865576ms for postStartSetup
	I1216 03:40:59.793778 1995776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:40:59.793816 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:59.813925 1995776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/kubernetes-upgrade-271074/id_rsa Username:docker}
	I1216 03:40:59.908845 1995776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:40:59.913523 1995776 fix.go:56] duration metric: took 4.493549148s for fixHost
	I1216 03:40:59.913550 1995776 start.go:83] releasing machines lock for "kubernetes-upgrade-271074", held for 4.493602291s
	I1216 03:40:59.913649 1995776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-271074
	I1216 03:40:59.931181 1995776 ssh_runner.go:195] Run: cat /version.json
	I1216 03:40:59.931228 1995776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:40:59.931245 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:59.931283 1995776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-271074
	I1216 03:40:59.952645 1995776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/kubernetes-upgrade-271074/id_rsa Username:docker}
	I1216 03:40:59.960771 1995776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/kubernetes-upgrade-271074/id_rsa Username:docker}
	I1216 03:41:00.093711 1995776 ssh_runner.go:195] Run: systemctl --version
	I1216 03:41:00.312694 1995776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:41:00.319592 1995776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:41:00.319687 1995776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:41:00.334377 1995776 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 03:41:00.334404 1995776 start.go:496] detecting cgroup driver to use...
	I1216 03:41:00.334444 1995776 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 03:41:00.334527 1995776 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 03:41:00.357896 1995776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 03:41:00.373966 1995776 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:41:00.374142 1995776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:41:00.394408 1995776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:41:00.410096 1995776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:41:00.535762 1995776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:41:00.653179 1995776 docker.go:234] disabling docker service ...
	I1216 03:41:00.653260 1995776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:41:00.669094 1995776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:41:00.682100 1995776 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:41:00.805779 1995776 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:41:00.926111 1995776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
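
The systemctl sequence above hands the CRI socket to containerd by stopping, disabling, and masking both cri-docker and dockerd. A condensed bash sketch of the same steps (assumes a systemd host; unit names as in the log):

  sudo systemctl stop -f cri-docker.socket cri-docker.service
  sudo systemctl disable cri-docker.socket
  sudo systemctl mask cri-docker.service
  sudo systemctl stop -f docker.socket docker.service
  sudo systemctl disable docker.socket
  sudo systemctl mask docker.service
  systemctl is-active --quiet docker || echo 'docker is off'
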
	I1216 03:41:00.939775 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:41:00.953552 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 03:41:00.963196 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 03:41:00.972463 1995776 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 03:41:00.972579 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 03:41:00.981488 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:41:00.990826 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 03:41:00.999935 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:41:01.010179 1995776 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:41:01.018579 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 03:41:01.027578 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 03:41:01.036736 1995776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 03:41:01.045920 1995776 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:41:01.053807 1995776 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:41:01.061282 1995776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:41:01.172465 1995776 ssh_runner.go:195] Run: sudo systemctl restart containerd
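
The sed passes above pin the pause (sandbox) image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, migrate runtime names to io.containerd.runc.v2, and re-enable unprivileged ports before restarting containerd. A sketch for verifying the rewrite took effect:

  grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
  sudo systemctl is-active containerd && echo 'containerd restarted cleanly'
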
	I1216 03:41:01.343216 1995776 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 03:41:01.343301 1995776 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 03:41:01.347340 1995776 start.go:564] Will wait 60s for crictl version
	I1216 03:41:01.347418 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:01.350898 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:41:01.375745 1995776 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 03:41:01.375821 1995776 ssh_runner.go:195] Run: containerd --version
	I1216 03:41:01.396378 1995776 ssh_runner.go:195] Run: containerd --version
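
Both 60-second waits above can be reproduced with a simple poll on the socket followed by the version probes (sketch):

  timeout 60 sh -c 'until [ -S /run/containerd/containerd.sock ]; do sleep 1; done'
  sudo /usr/local/bin/crictl version
  containerd --version
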
	I1216 03:41:01.422618 1995776 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 03:41:01.425640 1995776 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-271074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:41:01.442017 1995776 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 03:41:01.445609 1995776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:41:01.455387 1995776 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-271074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-271074 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:41:01.455507 1995776 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 03:41:01.455576 1995776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:41:01.480123 1995776 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1216 03:41:01.480194 1995776 ssh_runner.go:195] Run: which lz4
	I1216 03:41:01.483888 1995776 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 03:41:01.487407 1995776 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 03:41:01.487440 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1216 03:41:04.967143 1995776 containerd.go:563] duration metric: took 3.483294854s to copy over tarball
	I1216 03:41:04.967239 1995776 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 03:41:07.563748 1995776 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596481828s)
	I1216 03:41:07.563835 1995776 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	I1216 03:41:07.563949 1995776 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:41:07.618306 1995776 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
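
The preload test is just an image listing: if a known control-plane image is missing from the containerd store, minikube treats the preload as absent and falls back to per-image cache loading, which is what happens next after the tar extraction above failed on paths that already existed in the snapshotter. A sketch of the same check:

  sudo crictl images --output json \
    | grep -q 'registry.k8s.io/kube-apiserver:v1.35.0-beta.0' \
    && echo preloaded || echo 'not preloaded; loading cached images'
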
	I1216 03:41:07.618334 1995776 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 03:41:07.618399 1995776 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:41:07.618624 1995776 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:41:07.618723 1995776 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:41:07.618810 1995776 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:41:07.618887 1995776 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:41:07.618995 1995776 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1216 03:41:07.619120 1995776 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:41:07.619210 1995776 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:41:07.620851 1995776 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:41:07.621242 1995776 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:41:07.621491 1995776 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:41:07.621658 1995776 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:41:07.621804 1995776 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:41:07.621936 1995776 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:41:07.622067 1995776 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1216 03:41:07.622323 1995776 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:41:07.968884 1995776 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1216 03:41:07.968995 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:41:07.993100 1995776 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1216 03:41:07.993279 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1216 03:41:08.000672 1995776 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1216 03:41:08.000805 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:41:08.009642 1995776 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1216 03:41:08.009756 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:41:08.011825 1995776 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1216 03:41:08.011952 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1216 03:41:08.015274 1995776 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1216 03:41:08.015402 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:41:08.045404 1995776 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1216 03:41:08.045528 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:41:08.083029 1995776 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1216 03:41:08.083132 1995776 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:41:08.083231 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.152990 1995776 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1216 03:41:08.153061 1995776 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:41:08.153136 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.153270 1995776 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1216 03:41:08.153318 1995776 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:41:08.153357 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.180611 1995776 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1216 03:41:08.180663 1995776 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:41:08.180756 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.180873 1995776 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1216 03:41:08.180917 1995776 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1216 03:41:08.180949 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.181035 1995776 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1216 03:41:08.181074 1995776 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:41:08.181115 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.192273 1995776 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1216 03:41:08.192325 1995776 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:41:08.192405 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.192522 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:41:08.192625 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:41:08.192706 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:41:08.206840 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:41:08.206951 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:41:08.207220 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:41:08.402657 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:41:08.402770 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:41:08.402883 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:41:08.402979 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:41:08.433503 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:41:08.433622 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:41:08.433779 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:41:08.621886 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:41:08.622012 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:41:08.639794 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:41:08.639885 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:41:08.639968 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:41:08.639938 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:41:08.640120 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	W1216 03:41:08.790303 1995776 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 03:41:08.790522 1995776 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1216 03:41:08.790626 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:41:08.827633 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1216 03:41:08.827742 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:41:08.861876 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1216 03:41:08.861996 1995776 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1216 03:41:08.862065 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1216 03:41:08.862101 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1216 03:41:08.864182 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1216 03:41:08.864242 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1216 03:41:08.864313 1995776 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:41:08.917821 1995776 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1216 03:41:08.917907 1995776 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:41:08.917969 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:08.935289 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1216 03:41:08.935387 1995776 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1216 03:41:08.935574 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1216 03:41:08.935409 1995776 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1216 03:41:08.935684 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1216 03:41:08.935497 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:41:08.998591 1995776 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1216 03:41:08.998681 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1216 03:41:09.324960 1995776 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 03:41:09.325118 1995776 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:41:09.363994 1995776 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1216 03:41:09.364040 1995776 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:41:09.364094 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:41:09.364164 1995776 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 03:41:09.364197 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1216 03:41:11.524138 1995776 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.160013794s)
	I1216 03:41:11.524169 1995776 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1216 03:41:11.524189 1995776 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:41:11.524261 1995776 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:41:12.114345 1995776 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 03:41:12.114402 1995776 cache_images.go:94] duration metric: took 4.496053694s to LoadCachedImages
	W1216 03:41:12.114463 1995776 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0: no such file or directory
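
Each image in the fallback path runs the same cycle visible above: an existence check with ctr, removal of any stale or wrong-arch tag with crictl, an scp of the cached tarball, then an import into containerd's k8s.io namespace. A per-image sketch using pause:3.10.1:

  IMG=registry.k8s.io/pause:3.10.1
  TAR=/var/lib/minikube/images/pause_3.10.1
  sudo ctr -n=k8s.io images ls "name==${IMG}"      # existence check
  sudo /usr/local/bin/crictl rmi "${IMG}" || true  # drop the stale copy
  # (the tarball is copied over ssh from the host cache first)
  sudo ctr -n=k8s.io images import "${TAR}"
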
	I1216 03:41:12.114477 1995776 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 03:41:12.114581 1995776 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-271074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-271074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
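
The drop-in above clears and re-sets ExecStart so the versioned kubelet binary under /var/lib/minikube/binaries runs with node-specific flags. To inspect the merged unit on the node (sketch):

  systemctl cat kubelet
  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
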
	I1216 03:41:12.114648 1995776 ssh_runner.go:195] Run: sudo crictl info
	I1216 03:41:12.152630 1995776 cni.go:84] Creating CNI manager for ""
	I1216 03:41:12.152659 1995776 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 03:41:12.152676 1995776 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:41:12.152704 1995776 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-271074 NodeName:kubernetes-upgrade-271074 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/
certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:41:12.152825 1995776 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-271074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
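
The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file before it is used (sketch; `kubeadm config validate` exists from v1.26 on):

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new
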
	
	I1216 03:41:12.152902 1995776 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:41:12.170273 1995776 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:41:12.170351 1995776 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:41:12.181342 1995776 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1216 03:41:12.197046 1995776 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 03:41:12.210019 1995776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1216 03:41:12.222859 1995776 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:41:12.226633 1995776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
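
The /etc/hosts rewrite above is idempotent: strip any existing entry for the name, append a fresh one, and copy the result back into place. Generalized sketch:

  HOST=control-plane.minikube.internal
  IP=192.168.76.2
  { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > "/tmp/h.$$"
  sudo cp "/tmp/h.$$" /etc/hosts
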
	I1216 03:41:12.235941 1995776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:41:12.427997 1995776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:41:12.477080 1995776 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074 for IP: 192.168.76.2
	I1216 03:41:12.477107 1995776 certs.go:195] generating shared ca certs ...
	I1216 03:41:12.477124 1995776 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:41:12.477274 1995776 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 03:41:12.477327 1995776 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 03:41:12.477335 1995776 certs.go:257] generating profile certs ...
	I1216 03:41:12.477418 1995776 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.key
	I1216 03:41:12.477476 1995776 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/apiserver.key.cb94edfb
	I1216 03:41:12.477524 1995776 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/proxy-client.key
	I1216 03:41:12.477639 1995776 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 03:41:12.477673 1995776 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 03:41:12.477685 1995776 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 03:41:12.477714 1995776 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 03:41:12.477743 1995776 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:41:12.477766 1995776 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 03:41:12.477815 1995776 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 03:41:12.478373 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:41:12.508955 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 03:41:12.541768 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:41:12.570937 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:41:12.598253 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 03:41:12.620011 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:41:12.639473 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:41:12.658838 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:41:12.676629 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 03:41:12.694795 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 03:41:12.713180 1995776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:41:12.731175 1995776 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:41:12.744922 1995776 ssh_runner.go:195] Run: openssl version
	I1216 03:41:12.754216 1995776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 03:41:12.762809 1995776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 03:41:12.771066 1995776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 03:41:12.775032 1995776 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 03:41:12.775194 1995776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 03:41:12.817199 1995776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:41:12.824792 1995776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:41:12.832050 1995776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:41:12.839570 1995776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:41:12.843626 1995776 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:41:12.843714 1995776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:41:12.885768 1995776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:41:12.893657 1995776 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 03:41:12.901341 1995776 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 03:41:12.909443 1995776 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 03:41:12.913951 1995776 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 03:41:12.914031 1995776 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 03:41:12.956545 1995776 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
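
The three test -L probes above (3ec20f2e.0, b5213941.0, 51391683.0) check symlinks named after each certificate's OpenSSL subject hash, which is how TLS libraries locate CAs in /etc/ssl/certs. The hash-to-link derivation (sketch):

  PEM=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$PEM")
  sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
  sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "linked as ${HASH}.0"
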
	I1216 03:41:12.964328 1995776 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:41:12.968848 1995776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:41:13.011115 1995776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:41:13.053861 1995776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:41:13.099502 1995776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:41:13.144675 1995776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:41:13.190777 1995776 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
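
-checkend 86400 makes openssl exit nonzero if the certificate expires within 24 hours, which is the freshness gate for reusing the existing control-plane certs. The same sweep as a loop (sketch):

  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
           etcd/server etcd/healthcheck-client etcd/peer; do
    openssl x509 -noout -checkend 86400 \
      -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring soon: ${c}"
  done
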
	I1216 03:41:13.241685 1995776 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-271074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-271074 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:41:13.241769 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 03:41:13.241834 1995776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:41:13.274777 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:41:13.274797 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:41:13.274802 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:41:13.274806 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:41:13.274809 1995776 cri.go:89] found id: ""
	I1216 03:41:13.274860 1995776 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1216 03:41:13.295217 1995776 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T03:41:13Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1216 03:41:13.295288 1995776 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:41:13.302746 1995776 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 03:41:13.302763 1995776 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 03:41:13.302817 1995776 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 03:41:13.311810 1995776 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:41:13.312208 1995776 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-271074" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:41:13.312305 1995776 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-271074" cluster setting kubeconfig missing "kubernetes-upgrade-271074" context setting]
	I1216 03:41:13.312570 1995776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:41:13.313057 1995776 kapi.go:59] client config for kubernetes-upgrade-271074: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.key", CAFile:"/home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:41:13.313570 1995776 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 03:41:13.313582 1995776 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 03:41:13.313587 1995776 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 03:41:13.313591 1995776 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 03:41:13.313595 1995776 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 03:41:13.313851 1995776 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 03:41:13.324159 1995776 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 03:40:33.083244399 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 03:41:12.218301988 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-271074"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
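
The drift is a kubeadm API bump, v1beta3 to v1beta4: flat extraArgs maps become name/value lists and kubernetesVersion moves to v1.35.0-beta.0. kubeadm ships a converter for old configs (sketch; flags as documented for `kubeadm config migrate`):

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config migrate \
    --old-config /var/tmp/minikube/kubeadm.yaml \
    --new-config /var/tmp/minikube/kubeadm.yaml.new
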
	I1216 03:41:13.324178 1995776 kubeadm.go:1161] stopping kube-system containers ...
	I1216 03:41:13.324190 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1216 03:41:13.324253 1995776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:41:13.349163 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:41:13.349182 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:41:13.349194 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:41:13.349219 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:41:13.349223 1995776 cri.go:89] found id: ""
	I1216 03:41:13.349228 1995776 cri.go:252] Stopping containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:41:13.349287 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:41:13.353706 1995776 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907
	I1216 03:41:13.387598 1995776 ssh_runner.go:195] Run: sudo systemctl stop kubelet
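
Restart preparation, condensed from the lines above: collect kube-system container IDs via the CRI label filter, stop them with a grace period, then stop the kubelet so nothing respawns them (sketch):

  ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
  [ -n "$ids" ] && sudo crictl stop --timeout=10 $ids
  sudo systemctl stop kubelet
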
	I1216 03:41:13.404178 1995776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:41:13.415511 1995776 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 16 03:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 16 03:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 16 03:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 16 03:40 /etc/kubernetes/scheduler.conf
	
	I1216 03:41:13.415729 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:41:13.427152 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:41:13.438144 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:41:13.449188 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:41:13.449253 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:41:13.458667 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:41:13.467673 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:41:13.467749 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
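Note: before re-running kubeadm, each surviving kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint (https://control-plane.minikube.internal:8443). controller-manager.conf and scheduler.conf do not contain it (the grep exits with status 1), so both are removed and left for the subsequent kubeadm phases to regenerate.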
	I1216 03:41:13.475517 1995776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:41:13.483457 1995776 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:41:13.538709 1995776 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:41:15.388268 1995776 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.849521483s)
	I1216 03:41:15.388357 1995776 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:41:15.629443 1995776 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:41:15.707568 1995776 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
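Note: instead of a single full kubeadm init, the upgrade path re-runs the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml, then starts waiting for the kube-apiserver process to appear.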
	I1216 03:41:15.757260 1995776 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:41:15.757349 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:16.257933 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:16.757468 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:17.258180 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:17.757466 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:18.257483 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:18.758171 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:19.257425 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:19.757471 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:20.258166 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:20.758265 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:21.257515 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:21.757486 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:22.258122 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:22.757715 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:23.257878 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:23.757996 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:24.257539 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:24.758431 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:25.258226 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:25.758060 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:26.257483 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:26.757771 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:27.258149 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:27.757692 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:28.257480 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:28.757465 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:29.257892 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:29.757666 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:30.257596 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:30.758334 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:31.258211 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:31.757496 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:32.258422 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:32.758120 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:33.257466 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:33.758213 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:34.258058 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:34.758357 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:35.257532 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:35.757533 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:36.258093 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:36.757473 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:37.258279 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:37.758436 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:38.257785 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:38.757449 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:39.258033 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:39.758172 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:40.258344 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:40.758016 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:41.257750 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:41.757497 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:42.258403 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:42.757742 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:43.257495 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:43.758480 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:44.257914 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:44.758212 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:45.284209 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:45.757556 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:46.257470 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:46.758213 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:47.257762 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:47.758219 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:48.257409 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:48.757478 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:49.257466 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:49.757522 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:50.258379 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:50.758157 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:51.258304 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:51.757535 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:52.257464 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:52.757464 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:53.259149 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:53.758192 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:54.257642 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:54.758153 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:55.258075 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:55.757966 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:56.257969 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:56.757846 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:57.257563 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:57.757443 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:58.258161 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:58.758198 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:59.257480 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:41:59.757475 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:00.260283 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:00.758294 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:01.258149 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:01.757481 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:02.257911 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:02.758064 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:03.257534 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:03.757654 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:04.257507 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:04.757553 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:05.258192 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:05.758361 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:06.257492 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:06.757508 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:07.258271 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:07.757414 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:08.258328 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:08.757414 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:09.258382 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:09.758367 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:10.257846 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:10.757884 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:11.257535 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:11.758045 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:12.257882 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:12.757520 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:13.258297 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:13.758161 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:14.257649 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:14.757554 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:15.257877 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
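The kube-apiserver process never shows up: the pgrep poll fires roughly every 500ms from 03:41:15 to 03:42:15 without a match. After about a minute minikube pauses the wait, enumerates whatever CRI containers it can still find, and collects diagnostic logs before polling again.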
	I1216 03:42:15.757374 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:15.757513 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:15.787014 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:15.787036 1995776 cri.go:89] found id: ""
	I1216 03:42:15.787084 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:15.787146 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:15.790813 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:15.790889 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:15.817631 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:15.817658 1995776 cri.go:89] found id: ""
	I1216 03:42:15.817667 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:15.817733 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:15.821442 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:15.821520 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:15.846910 1995776 cri.go:89] found id: ""
	I1216 03:42:15.846933 1995776 logs.go:282] 0 containers: []
	W1216 03:42:15.846943 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:15.846950 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:15.847009 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:15.873971 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:15.873993 1995776 cri.go:89] found id: ""
	I1216 03:42:15.874001 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:15.874058 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:15.877717 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:15.877792 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:15.902679 1995776 cri.go:89] found id: ""
	I1216 03:42:15.902704 1995776 logs.go:282] 0 containers: []
	W1216 03:42:15.902713 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:15.902719 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:15.902787 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:15.932782 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:15.932807 1995776 cri.go:89] found id: ""
	I1216 03:42:15.932816 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:15.932874 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:15.937299 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:15.937377 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:15.963065 1995776 cri.go:89] found id: ""
	I1216 03:42:15.963092 1995776 logs.go:282] 0 containers: []
	W1216 03:42:15.963101 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:15.963108 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:15.963174 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:15.987940 1995776 cri.go:89] found id: ""
	I1216 03:42:15.987981 1995776 logs.go:282] 0 containers: []
	W1216 03:42:15.987991 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:15.988006 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:15.988023 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:16.058326 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:16.058355 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:16.058381 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:16.092373 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:16.092407 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:16.124014 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:16.124050 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:16.182914 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:16.182954 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:16.200695 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:16.200734 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:16.237209 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:16.237245 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:16.275351 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:16.275388 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:16.312468 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:16.312503 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
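From here the gather-and-retry cycle repeats: pgrep for the apiserver, enumerate CRI containers by name, dump kubelet, dmesg, containerd, and per-container logs, with kubectl describe nodes failing each time on "connection refused" to localhost:8443 because no apiserver is listening. The iterations below differ only in timestamps and in the order the log sources are gathered.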
	I1216 03:42:18.848284 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:18.858356 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:18.858452 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:18.885558 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:18.885632 1995776 cri.go:89] found id: ""
	I1216 03:42:18.885656 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:18.885725 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:18.889661 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:18.889776 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:18.916112 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:18.916133 1995776 cri.go:89] found id: ""
	I1216 03:42:18.916142 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:18.916203 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:18.920072 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:18.920153 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:18.946498 1995776 cri.go:89] found id: ""
	I1216 03:42:18.946520 1995776 logs.go:282] 0 containers: []
	W1216 03:42:18.946529 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:18.946536 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:18.946600 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:18.974530 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:18.974595 1995776 cri.go:89] found id: ""
	I1216 03:42:18.974622 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:18.974711 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:18.978703 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:18.978829 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:19.006441 1995776 cri.go:89] found id: ""
	I1216 03:42:19.006517 1995776 logs.go:282] 0 containers: []
	W1216 03:42:19.006546 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:19.006565 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:19.006675 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:19.032000 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:19.032024 1995776 cri.go:89] found id: ""
	I1216 03:42:19.032034 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:19.032094 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:19.035812 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:19.035894 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:19.064774 1995776 cri.go:89] found id: ""
	I1216 03:42:19.064799 1995776 logs.go:282] 0 containers: []
	W1216 03:42:19.064809 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:19.064814 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:19.064875 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:19.091598 1995776 cri.go:89] found id: ""
	I1216 03:42:19.091667 1995776 logs.go:282] 0 containers: []
	W1216 03:42:19.091681 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:19.091696 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:19.091714 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:19.149708 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:19.149747 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:19.217120 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:19.217144 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:19.217159 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:19.251553 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:19.251587 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:19.283274 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:19.283315 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:19.300481 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:19.300513 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:19.335818 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:19.335854 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:19.368477 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:19.368508 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:19.404115 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:19.404149 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:21.933188 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:21.944586 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:21.944663 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:21.970559 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:21.970583 1995776 cri.go:89] found id: ""
	I1216 03:42:21.970592 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:21.970653 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:21.974510 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:21.974588 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:22.005364 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:22.005391 1995776 cri.go:89] found id: ""
	I1216 03:42:22.005401 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:22.005472 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:22.010318 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:22.010398 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:22.038922 1995776 cri.go:89] found id: ""
	I1216 03:42:22.038951 1995776 logs.go:282] 0 containers: []
	W1216 03:42:22.038961 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:22.038968 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:22.039030 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:22.064517 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:22.064541 1995776 cri.go:89] found id: ""
	I1216 03:42:22.064551 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:22.064608 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:22.068617 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:22.068694 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:22.094034 1995776 cri.go:89] found id: ""
	I1216 03:42:22.094058 1995776 logs.go:282] 0 containers: []
	W1216 03:42:22.094066 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:22.094072 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:22.094132 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:22.119133 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:22.119158 1995776 cri.go:89] found id: ""
	I1216 03:42:22.119167 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:22.119250 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:22.122755 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:22.122859 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:22.152159 1995776 cri.go:89] found id: ""
	I1216 03:42:22.152186 1995776 logs.go:282] 0 containers: []
	W1216 03:42:22.152195 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:22.152201 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:22.152264 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:22.177215 1995776 cri.go:89] found id: ""
	I1216 03:42:22.177294 1995776 logs.go:282] 0 containers: []
	W1216 03:42:22.177311 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:22.177326 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:22.177353 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:22.214442 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:22.214475 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:22.253613 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:22.253642 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:22.316123 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:22.316157 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:22.358081 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:22.358115 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:22.390432 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:22.390465 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:22.422640 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:22.422677 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:22.440314 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:22.440343 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:22.512914 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:22.512938 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:22.512953 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:25.048309 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:25.058867 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:25.058946 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:25.085709 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:25.085733 1995776 cri.go:89] found id: ""
	I1216 03:42:25.085742 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:25.085801 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:25.089726 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:25.089802 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:25.116877 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:25.116901 1995776 cri.go:89] found id: ""
	I1216 03:42:25.116911 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:25.116988 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:25.120748 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:25.120825 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:25.146245 1995776 cri.go:89] found id: ""
	I1216 03:42:25.146271 1995776 logs.go:282] 0 containers: []
	W1216 03:42:25.146280 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:25.146287 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:25.146347 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:25.172753 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:25.172775 1995776 cri.go:89] found id: ""
	I1216 03:42:25.172784 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:25.172842 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:25.176629 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:25.176703 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:25.203474 1995776 cri.go:89] found id: ""
	I1216 03:42:25.203498 1995776 logs.go:282] 0 containers: []
	W1216 03:42:25.203521 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:25.203527 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:25.203610 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:25.233648 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:25.233674 1995776 cri.go:89] found id: ""
	I1216 03:42:25.233683 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:25.233745 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:25.237636 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:25.237711 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:25.268880 1995776 cri.go:89] found id: ""
	I1216 03:42:25.268956 1995776 logs.go:282] 0 containers: []
	W1216 03:42:25.268979 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:25.268993 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:25.269072 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:25.298276 1995776 cri.go:89] found id: ""
	I1216 03:42:25.298301 1995776 logs.go:282] 0 containers: []
	W1216 03:42:25.298309 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:25.298322 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:25.298346 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:25.328558 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:25.328589 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:25.369403 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:25.369437 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:25.385954 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:25.386033 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:25.446354 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:25.446391 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:25.532104 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:25.532126 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:25.532153 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:25.580964 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:25.580998 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:25.617444 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:25.617477 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:25.653567 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:25.653600 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:28.192028 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:28.204993 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:28.205069 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:28.230192 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:28.230213 1995776 cri.go:89] found id: ""
	I1216 03:42:28.230222 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:28.230279 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:28.234219 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:28.234336 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:28.260736 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:28.260762 1995776 cri.go:89] found id: ""
	I1216 03:42:28.260771 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:28.260860 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:28.264715 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:28.264808 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:28.291699 1995776 cri.go:89] found id: ""
	I1216 03:42:28.291728 1995776 logs.go:282] 0 containers: []
	W1216 03:42:28.291738 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:28.291744 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:28.291816 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:28.320151 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:28.320221 1995776 cri.go:89] found id: ""
	I1216 03:42:28.320248 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:28.320330 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:28.324173 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:28.324256 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:28.350794 1995776 cri.go:89] found id: ""
	I1216 03:42:28.350822 1995776 logs.go:282] 0 containers: []
	W1216 03:42:28.350831 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:28.350837 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:28.350896 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:28.375873 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:28.375898 1995776 cri.go:89] found id: ""
	I1216 03:42:28.375907 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:28.375973 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:28.379497 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:28.379572 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:28.409224 1995776 cri.go:89] found id: ""
	I1216 03:42:28.409308 1995776 logs.go:282] 0 containers: []
	W1216 03:42:28.409334 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:28.409346 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:28.409426 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:28.449725 1995776 cri.go:89] found id: ""
	I1216 03:42:28.449752 1995776 logs.go:282] 0 containers: []
	W1216 03:42:28.449761 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:28.449776 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:28.449789 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:28.554630 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:28.554655 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:28.554668 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:28.588986 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:28.589019 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:28.623689 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:28.623720 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:28.653457 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:28.653486 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:28.713629 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:28.713673 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:28.732052 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:28.732093 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:28.766708 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:28.766739 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:28.801800 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:28.801833 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:31.333048 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:31.344551 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:31.344627 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:31.370079 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:31.370103 1995776 cri.go:89] found id: ""
	I1216 03:42:31.370112 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:31.370168 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:31.374196 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:31.374274 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:31.403714 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:31.403753 1995776 cri.go:89] found id: ""
	I1216 03:42:31.403762 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:31.403820 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:31.407671 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:31.407772 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:31.433630 1995776 cri.go:89] found id: ""
	I1216 03:42:31.433657 1995776 logs.go:282] 0 containers: []
	W1216 03:42:31.433667 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:31.433674 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:31.433733 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:31.463798 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:31.463832 1995776 cri.go:89] found id: ""
	I1216 03:42:31.463841 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:31.463905 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:31.470294 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:31.470368 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:31.505411 1995776 cri.go:89] found id: ""
	I1216 03:42:31.505435 1995776 logs.go:282] 0 containers: []
	W1216 03:42:31.505443 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:31.505449 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:31.505520 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:31.537801 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:31.537822 1995776 cri.go:89] found id: ""
	I1216 03:42:31.537830 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:31.537898 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:31.541571 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:31.541648 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:31.566439 1995776 cri.go:89] found id: ""
	I1216 03:42:31.566520 1995776 logs.go:282] 0 containers: []
	W1216 03:42:31.566544 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:31.566562 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:31.566685 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:31.594163 1995776 cri.go:89] found id: ""
	I1216 03:42:31.594232 1995776 logs.go:282] 0 containers: []
	W1216 03:42:31.594259 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:31.594286 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:31.594322 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:31.659388 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
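The `describe nodes` step fails because kubectl cannot reach the apiserver on localhost:8443, even though an apiserver container (4ed4ce6f…) exists in containerd: the process is present but nothing is accepting connections on the secure port yet. A minimal reachability probe in Go that reproduces what the refused connection tells us; the address and timeout are assumptions for illustration, not taken from minikube's sources:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverReachable reports whether anything accepts TCP connections
    // on the apiserver's secure port. A refused connection is exactly the
    // "connection to the server localhost:8443 was refused" failure above.
    func apiserverReachable(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println(apiserverReachable("localhost:8443"))
    }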
	I1216 03:42:31.659464 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:31.659493 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:31.694129 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:31.694163 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:31.726294 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:31.726325 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:31.765969 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:31.766014 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:31.827914 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:31.827962 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:31.844243 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:31.844272 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:31.880451 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:31.880483 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:31.916732 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:31.916766 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
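From here the same sequence repeats at roughly three-second intervals: probe for a running kube-apiserver with pgrep, enumerate the control-plane containers, and gather another round of logs while the check keeps failing. A minimal sketch of that retry shape in Go, with hypothetical check/gather callbacks standing in for minikube's internals:

    package main

    import (
        "fmt"
        "time"
    )

    // pollUntilHealthy retries check on a fixed interval until it succeeds
    // or the deadline passes, running gather after each miss. This mirrors
    // the shape of the trace above, where each failed apiserver check
    // triggers another round of log gathering.
    func pollUntilHealthy(check func() error, gather func(), interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := check(); err == nil {
                return nil
            }
            gather()
            time.Sleep(interval)
        }
        return fmt.Errorf("not healthy after %v", timeout)
    }

    func main() {
        err := pollUntilHealthy(
            func() error { return fmt.Errorf("connection refused") }, // always fails, like the trace
            func() { fmt.Println("gathering logs ...") },
            3*time.Second, 9*time.Second,
        )
        fmt.Println(err)
    }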
	I1216 03:42:34.468246 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:34.480313 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:34.480395 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:34.509110 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:34.509131 1995776 cri.go:89] found id: ""
	I1216 03:42:34.509139 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:34.509196 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:34.512812 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:34.512885 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:34.540558 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:34.540579 1995776 cri.go:89] found id: ""
	I1216 03:42:34.540587 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:34.540642 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:34.544289 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:34.544408 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:34.569739 1995776 cri.go:89] found id: ""
	I1216 03:42:34.569762 1995776 logs.go:282] 0 containers: []
	W1216 03:42:34.569773 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:34.569780 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:34.569839 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:34.597386 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:34.597410 1995776 cri.go:89] found id: ""
	I1216 03:42:34.597419 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:34.597499 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:34.601148 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:34.601222 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:34.626446 1995776 cri.go:89] found id: ""
	I1216 03:42:34.626474 1995776 logs.go:282] 0 containers: []
	W1216 03:42:34.626483 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:34.626489 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:34.626549 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:34.652837 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:34.652860 1995776 cri.go:89] found id: ""
	I1216 03:42:34.652868 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:34.652946 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:34.656715 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:34.656788 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:34.681686 1995776 cri.go:89] found id: ""
	I1216 03:42:34.681709 1995776 logs.go:282] 0 containers: []
	W1216 03:42:34.681718 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:34.681724 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:34.681786 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:34.711306 1995776 cri.go:89] found id: ""
	I1216 03:42:34.711374 1995776 logs.go:282] 0 containers: []
	W1216 03:42:34.711389 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:34.711404 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:34.711417 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:34.727975 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:34.728006 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:34.763622 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:34.763656 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:34.795197 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:34.795229 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:34.832088 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:34.832118 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:34.873165 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:34.873201 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:34.903475 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:34.903507 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:34.931717 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:34.931754 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:34.992706 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:34.992743 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:35.062740 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:37.562977 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:37.573317 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:37.573394 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:37.599750 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:37.599774 1995776 cri.go:89] found id: ""
	I1216 03:42:37.599783 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:37.599846 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:37.603674 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:37.603750 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:37.628789 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:37.628812 1995776 cri.go:89] found id: ""
	I1216 03:42:37.628821 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:37.628901 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:37.635306 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:37.635386 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:37.664968 1995776 cri.go:89] found id: ""
	I1216 03:42:37.664992 1995776 logs.go:282] 0 containers: []
	W1216 03:42:37.665001 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:37.665007 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:37.665066 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:37.691863 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:37.691885 1995776 cri.go:89] found id: ""
	I1216 03:42:37.691894 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:37.691957 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:37.695723 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:37.695800 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:37.720870 1995776 cri.go:89] found id: ""
	I1216 03:42:37.720891 1995776 logs.go:282] 0 containers: []
	W1216 03:42:37.720901 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:37.720907 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:37.720969 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:37.756634 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:37.756657 1995776 cri.go:89] found id: ""
	I1216 03:42:37.756666 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:37.756722 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:37.760441 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:37.760539 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:37.784864 1995776 cri.go:89] found id: ""
	I1216 03:42:37.784889 1995776 logs.go:282] 0 containers: []
	W1216 03:42:37.784898 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:37.784904 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:37.785008 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:37.814426 1995776 cri.go:89] found id: ""
	I1216 03:42:37.814464 1995776 logs.go:282] 0 containers: []
	W1216 03:42:37.814473 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:37.814488 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:37.814500 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:37.844011 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:37.844048 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:37.916674 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:37.916697 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:37.916710 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:37.958129 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:37.958172 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:37.991885 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:37.991917 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:38.025232 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:38.025266 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:38.064074 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:38.064111 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:38.099342 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:38.099372 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:38.158072 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:38.158109 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
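The "container status" step in each cycle uses a shell fallback (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a): prefer crictl, and fall back to docker if crictl is missing or fails. A sketch of the same prefer-then-fallback pattern in Go, illustrative only (minikube issues the shell command shown in the trace; the sudo prefix is dropped here for brevity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the shell fallback in the container-status step above.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
            return out, nil
        }
        return exec.Command("docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(string(out), err)
    }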
	I1216 03:42:40.674695 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:40.685014 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:40.685111 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:40.711812 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:40.711846 1995776 cri.go:89] found id: ""
	I1216 03:42:40.711855 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:40.711913 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:40.715689 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:40.715768 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:40.740915 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:40.740938 1995776 cri.go:89] found id: ""
	I1216 03:42:40.740947 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:40.741004 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:40.744614 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:40.744690 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:40.776292 1995776 cri.go:89] found id: ""
	I1216 03:42:40.776317 1995776 logs.go:282] 0 containers: []
	W1216 03:42:40.776326 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:40.776335 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:40.776398 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:40.801582 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:40.801607 1995776 cri.go:89] found id: ""
	I1216 03:42:40.801617 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:40.801679 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:40.805477 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:40.805552 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:40.834919 1995776 cri.go:89] found id: ""
	I1216 03:42:40.834945 1995776 logs.go:282] 0 containers: []
	W1216 03:42:40.834954 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:40.834960 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:40.835030 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:40.861401 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:40.861423 1995776 cri.go:89] found id: ""
	I1216 03:42:40.861431 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:40.861498 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:40.865228 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:40.865345 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:40.889796 1995776 cri.go:89] found id: ""
	I1216 03:42:40.889820 1995776 logs.go:282] 0 containers: []
	W1216 03:42:40.889829 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:40.889835 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:40.889897 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:40.914252 1995776 cri.go:89] found id: ""
	I1216 03:42:40.914276 1995776 logs.go:282] 0 containers: []
	W1216 03:42:40.914285 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:40.914300 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:40.914314 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:40.973232 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:40.973269 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:41.041521 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:41.041550 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:41.041563 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:41.073798 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:41.073831 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:41.090585 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:41.090667 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:41.128377 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:41.128411 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:41.163378 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:41.163416 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:41.206319 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:41.206353 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:41.248949 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:41.248991 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:43.782095 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:43.792176 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:43.792279 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:43.817448 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:43.817469 1995776 cri.go:89] found id: ""
	I1216 03:42:43.817477 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:43.817551 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:43.821130 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:43.821202 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:43.846065 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:43.846086 1995776 cri.go:89] found id: ""
	I1216 03:42:43.846094 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:43.846153 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:43.849811 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:43.849888 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:43.873697 1995776 cri.go:89] found id: ""
	I1216 03:42:43.873721 1995776 logs.go:282] 0 containers: []
	W1216 03:42:43.873731 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:43.873737 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:43.873799 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:43.902914 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:43.902939 1995776 cri.go:89] found id: ""
	I1216 03:42:43.902948 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:43.903008 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:43.906679 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:43.906753 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:43.941165 1995776 cri.go:89] found id: ""
	I1216 03:42:43.941188 1995776 logs.go:282] 0 containers: []
	W1216 03:42:43.941196 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:43.941208 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:43.941269 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:43.969683 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:43.969744 1995776 cri.go:89] found id: ""
	I1216 03:42:43.969776 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:43.969864 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:43.973593 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:43.973714 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:43.999139 1995776 cri.go:89] found id: ""
	I1216 03:42:43.999164 1995776 logs.go:282] 0 containers: []
	W1216 03:42:43.999184 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:43.999192 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:43.999252 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:44.025824 1995776 cri.go:89] found id: ""
	I1216 03:42:44.025850 1995776 logs.go:282] 0 containers: []
	W1216 03:42:44.025860 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:44.025875 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:44.025888 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:44.057100 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:44.057131 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:44.115967 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:44.116003 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:44.132591 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:44.132623 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:44.163798 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:44.163835 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:44.226757 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:44.226794 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:44.292660 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:44.292682 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:44.292695 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:44.325529 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:44.325562 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:44.357698 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:44.357725 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:46.888766 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:46.900603 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:46.900677 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:46.933595 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:46.933621 1995776 cri.go:89] found id: ""
	I1216 03:42:46.933630 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:46.933695 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:46.945011 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:46.945118 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:46.983779 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:46.983804 1995776 cri.go:89] found id: ""
	I1216 03:42:46.983813 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:46.983870 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:46.987993 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:46.988087 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:47.017615 1995776 cri.go:89] found id: ""
	I1216 03:42:47.017639 1995776 logs.go:282] 0 containers: []
	W1216 03:42:47.017648 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:47.017654 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:47.017714 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:47.049573 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:47.049594 1995776 cri.go:89] found id: ""
	I1216 03:42:47.049602 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:47.049659 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:47.053558 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:47.053637 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:47.089580 1995776 cri.go:89] found id: ""
	I1216 03:42:47.089614 1995776 logs.go:282] 0 containers: []
	W1216 03:42:47.089623 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:47.089640 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:47.089716 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:47.117587 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:47.117664 1995776 cri.go:89] found id: ""
	I1216 03:42:47.117687 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:47.117773 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:47.121688 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:47.121771 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:47.161670 1995776 cri.go:89] found id: ""
	I1216 03:42:47.161693 1995776 logs.go:282] 0 containers: []
	W1216 03:42:47.161701 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:47.161707 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:47.161768 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:47.209554 1995776 cri.go:89] found id: ""
	I1216 03:42:47.209576 1995776 logs.go:282] 0 containers: []
	W1216 03:42:47.209585 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:47.209598 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:47.209612 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:47.268466 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:47.268539 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:47.304873 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:47.304903 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:47.321269 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:47.321297 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:47.354158 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:47.354190 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:47.387537 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:47.387573 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:47.436389 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:47.436421 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:47.467294 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:47.467328 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:47.527157 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:47.527193 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:47.597785 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:50.099495 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:50.110043 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:50.110114 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:50.136016 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:50.136055 1995776 cri.go:89] found id: ""
	I1216 03:42:50.136065 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:50.136140 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:50.140116 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:50.140192 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:50.164779 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:50.164802 1995776 cri.go:89] found id: ""
	I1216 03:42:50.164822 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:50.164880 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:50.168751 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:50.168829 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:50.207569 1995776 cri.go:89] found id: ""
	I1216 03:42:50.207649 1995776 logs.go:282] 0 containers: []
	W1216 03:42:50.207674 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:50.207693 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:50.207801 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:50.243725 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:50.243748 1995776 cri.go:89] found id: ""
	I1216 03:42:50.243757 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:50.243835 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:50.248652 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:50.248735 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:50.278487 1995776 cri.go:89] found id: ""
	I1216 03:42:50.278573 1995776 logs.go:282] 0 containers: []
	W1216 03:42:50.278598 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:50.278618 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:50.278715 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:50.312713 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:50.312737 1995776 cri.go:89] found id: ""
	I1216 03:42:50.312757 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:50.312836 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:50.316712 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:50.316785 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:50.345689 1995776 cri.go:89] found id: ""
	I1216 03:42:50.345713 1995776 logs.go:282] 0 containers: []
	W1216 03:42:50.345722 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:50.345728 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:50.345797 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:50.370313 1995776 cri.go:89] found id: ""
	I1216 03:42:50.370337 1995776 logs.go:282] 0 containers: []
	W1216 03:42:50.370345 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:50.370359 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:50.370378 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:50.405184 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:50.405223 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:50.437209 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:50.437242 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:50.487123 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:50.487153 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:50.516477 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:50.516511 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:50.579928 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:50.579964 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:50.640071 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:50.640141 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:50.640171 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:50.681334 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:50.681365 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:50.726803 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:50.726831 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:53.244707 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:53.255000 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:53.255104 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:53.280174 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:53.280198 1995776 cri.go:89] found id: ""
	I1216 03:42:53.280207 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:53.280267 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:53.284004 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:53.284076 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:53.314699 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:53.314725 1995776 cri.go:89] found id: ""
	I1216 03:42:53.314734 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:53.314808 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:53.319008 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:53.319129 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:53.345595 1995776 cri.go:89] found id: ""
	I1216 03:42:53.345623 1995776 logs.go:282] 0 containers: []
	W1216 03:42:53.345641 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:53.345648 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:53.345722 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:53.371784 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:53.371807 1995776 cri.go:89] found id: ""
	I1216 03:42:53.371817 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:53.371875 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:53.375530 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:53.375607 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:53.401841 1995776 cri.go:89] found id: ""
	I1216 03:42:53.401868 1995776 logs.go:282] 0 containers: []
	W1216 03:42:53.401876 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:53.401883 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:53.401951 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:53.426610 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:53.426634 1995776 cri.go:89] found id: ""
	I1216 03:42:53.426643 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:53.426710 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:53.430368 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:53.430452 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:53.457648 1995776 cri.go:89] found id: ""
	I1216 03:42:53.457683 1995776 logs.go:282] 0 containers: []
	W1216 03:42:53.457692 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:53.457699 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:53.457769 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:53.482359 1995776 cri.go:89] found id: ""
	I1216 03:42:53.482393 1995776 logs.go:282] 0 containers: []
	W1216 03:42:53.482402 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:53.482417 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:53.482430 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:53.498318 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:53.498348 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:53.531502 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:53.531536 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:53.564435 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:53.564466 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:53.595366 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:53.595404 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:53.625989 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:53.626026 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:53.684034 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:53.684069 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:53.747858 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:53.747881 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:53.747895 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:53.786096 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:53.786132 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:56.330874 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:56.341245 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:56.341316 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:56.371214 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:56.371234 1995776 cri.go:89] found id: ""
	I1216 03:42:56.371256 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:56.371315 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:56.375129 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:56.375206 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:56.401182 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:56.401213 1995776 cri.go:89] found id: ""
	I1216 03:42:56.401222 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:56.401281 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:56.405101 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:56.405179 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:56.429602 1995776 cri.go:89] found id: ""
	I1216 03:42:56.429628 1995776 logs.go:282] 0 containers: []
	W1216 03:42:56.429637 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:56.429644 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:56.429702 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:56.458925 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:56.458950 1995776 cri.go:89] found id: ""
	I1216 03:42:56.458959 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:56.459017 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:56.462641 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:56.462715 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:56.488151 1995776 cri.go:89] found id: ""
	I1216 03:42:56.488175 1995776 logs.go:282] 0 containers: []
	W1216 03:42:56.488185 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:56.488192 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:56.488253 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:56.513958 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:56.513982 1995776 cri.go:89] found id: ""
	I1216 03:42:56.513991 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:56.514060 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:56.517824 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:56.517924 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:56.544365 1995776 cri.go:89] found id: ""
	I1216 03:42:56.544389 1995776 logs.go:282] 0 containers: []
	W1216 03:42:56.544398 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:56.544404 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:56.544468 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:56.569871 1995776 cri.go:89] found id: ""
	I1216 03:42:56.569896 1995776 logs.go:282] 0 containers: []
	W1216 03:42:56.569917 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:56.569931 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:56.569944 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:56.603698 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:56.603729 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:42:56.633561 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:56.633591 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:56.691320 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:56.691355 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:56.708140 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:56.708172 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:56.739562 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:56.739598 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:56.784037 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:56.784070 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:56.815458 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:56.815493 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:56.882042 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:56.882065 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:56.882080 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
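
The cycle above repeats roughly every three seconds: minikube checks for a running kube-apiserver process, enumerates CRI containers per component, and re-gathers bounded logs. A minimal Go sketch of that polling pattern (the function name and timeout here are illustrative assumptions, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess is a hypothetical stand-in for the wait loop
    // driving the log above: re-run the same liveness probe until it
    // succeeds or a deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // The exact check from the log: is a kube-apiserver process up?
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // process found; move on to API health checks
            }
            time.Sleep(3 * time.Second) // matches the ~3s cadence above
        }
        return fmt.Errorf("kube-apiserver process not found within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
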
	I1216 03:42:59.416116 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:42:59.426611 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:42:59.426686 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:42:59.453900 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:59.453924 1995776 cri.go:89] found id: ""
	I1216 03:42:59.453933 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:42:59.453992 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:59.457772 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:42:59.457849 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:42:59.483250 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:59.483274 1995776 cri.go:89] found id: ""
	I1216 03:42:59.483282 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:42:59.483342 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:59.486987 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:42:59.487085 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:42:59.517196 1995776 cri.go:89] found id: ""
	I1216 03:42:59.517220 1995776 logs.go:282] 0 containers: []
	W1216 03:42:59.517230 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:42:59.517236 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:42:59.517299 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:42:59.547912 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:59.547941 1995776 cri.go:89] found id: ""
	I1216 03:42:59.547950 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:42:59.548008 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:59.551683 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:42:59.551761 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:42:59.576336 1995776 cri.go:89] found id: ""
	I1216 03:42:59.576360 1995776 logs.go:282] 0 containers: []
	W1216 03:42:59.576369 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:42:59.576375 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:42:59.576435 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:42:59.600999 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:59.601021 1995776 cri.go:89] found id: ""
	I1216 03:42:59.601031 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:42:59.601089 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:42:59.604803 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:42:59.604880 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:42:59.629419 1995776 cri.go:89] found id: ""
	I1216 03:42:59.629506 1995776 logs.go:282] 0 containers: []
	W1216 03:42:59.629529 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:42:59.629568 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:42:59.629668 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:42:59.655167 1995776 cri.go:89] found id: ""
	I1216 03:42:59.655190 1995776 logs.go:282] 0 containers: []
	W1216 03:42:59.655199 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:42:59.655212 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:42:59.655230 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:42:59.713462 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:42:59.713498 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:42:59.747673 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:42:59.747706 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:42:59.781922 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:42:59.781953 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:42:59.812750 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:42:59.812790 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:42:59.828959 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:42:59.828993 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:42:59.893344 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:42:59.893367 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:42:59.893382 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:42:59.927979 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:42:59.928012 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:42:59.986219 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:42:59.986258 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
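
Each "listing CRI containers" / "found id:" pair above comes from the same shape of query: crictl ps -a --quiet --name=<component>, which prints one container ID per line, or nothing at all — which is what produces the empty found id: "" entries for coredns, kube-proxy, kindnet, and storage-provisioner. A hedged sketch of that discovery step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the discovery query in the log: ask the CRI
    // for containers in any state whose name matches a component.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainerIDs("etcd")
        fmt.Println(ids, err)
    }
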
	I1216 03:43:02.541128 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:02.552211 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:02.552285 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:02.581192 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:02.581212 1995776 cri.go:89] found id: ""
	I1216 03:43:02.581220 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:02.581275 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:02.585388 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:02.585493 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:02.616880 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:02.616900 1995776 cri.go:89] found id: ""
	I1216 03:43:02.616909 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:02.616968 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:02.620927 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:02.621046 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:02.646235 1995776 cri.go:89] found id: ""
	I1216 03:43:02.646263 1995776 logs.go:282] 0 containers: []
	W1216 03:43:02.646271 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:02.646278 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:02.646343 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:02.672658 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:02.672678 1995776 cri.go:89] found id: ""
	I1216 03:43:02.672686 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:02.672743 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:02.676555 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:02.676636 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:02.701096 1995776 cri.go:89] found id: ""
	I1216 03:43:02.701177 1995776 logs.go:282] 0 containers: []
	W1216 03:43:02.701201 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:02.701214 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:02.701288 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:02.726068 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:02.726089 1995776 cri.go:89] found id: ""
	I1216 03:43:02.726098 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:02.726157 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:02.729984 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:02.730059 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:02.764619 1995776 cri.go:89] found id: ""
	I1216 03:43:02.764644 1995776 logs.go:282] 0 containers: []
	W1216 03:43:02.764653 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:02.764660 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:02.764737 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:02.792897 1995776 cri.go:89] found id: ""
	I1216 03:43:02.792921 1995776 logs.go:282] 0 containers: []
	W1216 03:43:02.792929 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:02.792943 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:02.792955 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:02.822252 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:02.822284 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:02.855513 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:02.855548 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:02.916305 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:02.916341 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:02.933181 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:02.933211 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:03.028303 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:03.028380 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:03.028401 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:03.067553 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:03.067588 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:03.109121 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:03.109155 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:03.144903 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:03.144937 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
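
The recurring "describe nodes" failure is the key symptom in these cycles: a kube-apiserver container exists (pgrep and crictl both find it), yet nothing accepts connections on localhost:8443, so every kubectl call is refused. A process check alone cannot distinguish those two states; a plain TCP dial can. Illustrative only:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // portServing separates "process exists" from "socket accepting
    // connections" — the distinction the repeated failures above hinge on.
    func portServing(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false // connection refused or timed out: not serving yet
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println(portServing("127.0.0.1:8443")) // false while kubectl is refused
    }
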
	I1216 03:43:05.675942 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:05.686422 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:05.686500 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:05.717682 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:05.717703 1995776 cri.go:89] found id: ""
	I1216 03:43:05.717712 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:05.717785 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:05.722086 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:05.722159 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:05.761839 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:05.761858 1995776 cri.go:89] found id: ""
	I1216 03:43:05.761866 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:05.761921 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:05.767304 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:05.767381 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:05.801371 1995776 cri.go:89] found id: ""
	I1216 03:43:05.801400 1995776 logs.go:282] 0 containers: []
	W1216 03:43:05.801409 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:05.801416 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:05.801476 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:05.833347 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:05.833371 1995776 cri.go:89] found id: ""
	I1216 03:43:05.833380 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:05.833428 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:05.837823 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:05.837914 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:05.866241 1995776 cri.go:89] found id: ""
	I1216 03:43:05.866269 1995776 logs.go:282] 0 containers: []
	W1216 03:43:05.866278 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:05.866285 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:05.866345 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:05.898461 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:05.898489 1995776 cri.go:89] found id: ""
	I1216 03:43:05.898498 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:05.898558 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:05.903354 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:05.903436 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:05.931807 1995776 cri.go:89] found id: ""
	I1216 03:43:05.931837 1995776 logs.go:282] 0 containers: []
	W1216 03:43:05.931847 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:05.931853 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:05.931917 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:05.985572 1995776 cri.go:89] found id: ""
	I1216 03:43:05.985602 1995776 logs.go:282] 0 containers: []
	W1216 03:43:05.985610 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:05.985623 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:05.985635 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:06.067749 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:06.067794 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:06.111121 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:06.111161 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:06.172297 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:06.172339 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:06.192890 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:06.192920 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:06.277856 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:06.277876 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:06.277889 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:06.311626 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:06.311664 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:06.346906 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:06.346936 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:06.382470 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:06.382502 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
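
Every "Gathering logs for ..." step above is a bounded collector: systemd units (kubelet, containerd) go through journalctl, CRI containers through crictl logs, each capped at 400 lines so the report stays readable. A sketch of both shapes (the helper names are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitLogs collects the last 400 lines of a systemd unit's journal,
    // as the kubelet and containerd steps in the log do.
    func unitLogs(unit string) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
        return string(out), err
    }

    // containerLogs collects the last 400 lines from a CRI container by ID,
    // as the kube-apiserver/etcd/scheduler/controller-manager steps do.
    func containerLogs(id string) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := unitLogs("kubelet")
        fmt.Println(len(out), err)
    }
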
	I1216 03:43:08.915945 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:08.926387 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:08.926458 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:08.964260 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:08.964285 1995776 cri.go:89] found id: ""
	I1216 03:43:08.964295 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:08.964353 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:08.969419 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:08.969493 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:09.005931 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:09.005953 1995776 cri.go:89] found id: ""
	I1216 03:43:09.005961 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:09.006024 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:09.010939 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:09.011016 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:09.048470 1995776 cri.go:89] found id: ""
	I1216 03:43:09.048498 1995776 logs.go:282] 0 containers: []
	W1216 03:43:09.048507 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:09.048513 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:09.048576 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:09.074093 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:09.074118 1995776 cri.go:89] found id: ""
	I1216 03:43:09.074126 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:09.074187 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:09.077824 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:09.077900 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:09.102396 1995776 cri.go:89] found id: ""
	I1216 03:43:09.102425 1995776 logs.go:282] 0 containers: []
	W1216 03:43:09.102435 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:09.102442 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:09.102505 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:09.127658 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:09.127695 1995776 cri.go:89] found id: ""
	I1216 03:43:09.127703 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:09.127764 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:09.131395 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:09.131523 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:09.158472 1995776 cri.go:89] found id: ""
	I1216 03:43:09.158500 1995776 logs.go:282] 0 containers: []
	W1216 03:43:09.158509 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:09.158515 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:09.158584 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:09.183906 1995776 cri.go:89] found id: ""
	I1216 03:43:09.183981 1995776 logs.go:282] 0 containers: []
	W1216 03:43:09.183991 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:09.184012 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:09.184027 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:09.200414 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:09.200443 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:09.264989 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:09.265013 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:09.265027 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:09.295935 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:09.295965 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:09.325492 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:09.325594 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:09.385907 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:09.385942 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:09.419494 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:09.419530 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:09.455607 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:09.455640 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:09.491220 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:09.491258 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
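
The "container status" collector is the one command above with a fallback chain rather than a fixed binary: it resolves crictl via `which` (falling back to the bare name on PATH) and, if that whole invocation fails, tries docker ps -a instead. A sketch reproducing the chain verbatim under bash -c:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus runs the exact fallback chain from the log:
    // prefer crictl (resolved via `which`, else the bare name), and
    // fall back to docker if the crictl invocation fails entirely.
    func containerStatus() (string, error) {
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(out, err)
    }
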
	I1216 03:43:12.023688 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:12.034506 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:12.034582 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:12.060600 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:12.060623 1995776 cri.go:89] found id: ""
	I1216 03:43:12.060632 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:12.060687 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:12.064566 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:12.064643 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:12.090332 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:12.090358 1995776 cri.go:89] found id: ""
	I1216 03:43:12.090368 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:12.090425 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:12.094093 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:12.094172 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:12.121362 1995776 cri.go:89] found id: ""
	I1216 03:43:12.121386 1995776 logs.go:282] 0 containers: []
	W1216 03:43:12.121395 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:12.121401 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:12.121458 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:12.148678 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:12.148702 1995776 cri.go:89] found id: ""
	I1216 03:43:12.148711 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:12.148800 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:12.152546 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:12.152618 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:12.178700 1995776 cri.go:89] found id: ""
	I1216 03:43:12.178724 1995776 logs.go:282] 0 containers: []
	W1216 03:43:12.178733 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:12.178739 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:12.178800 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:12.209000 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:12.209023 1995776 cri.go:89] found id: ""
	I1216 03:43:12.209037 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:12.209094 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:12.212981 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:12.213055 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:12.238972 1995776 cri.go:89] found id: ""
	I1216 03:43:12.238999 1995776 logs.go:282] 0 containers: []
	W1216 03:43:12.239009 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:12.239015 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:12.239104 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:12.268533 1995776 cri.go:89] found id: ""
	I1216 03:43:12.268565 1995776 logs.go:282] 0 containers: []
	W1216 03:43:12.268574 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:12.268608 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:12.268625 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:12.300961 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:12.300989 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:12.317325 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:12.317355 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:12.383084 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:12.383109 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:12.383124 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:12.415490 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:12.415521 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:12.450295 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:12.450329 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:12.481560 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:12.481593 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:12.511371 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:12.511402 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:12.569711 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:12.569748 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
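
The dmesg collector filters the kernel ring buffer down to warning-and-above messages; the flags are util-linux dmesg options (-P disables the pager, -H enables human-readable output, -L=never disables color), with tail bounding the result to the usual 400 lines. A thin wrapper sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kernelWarnings wraps the dmesg pipeline from the log: only messages
    // at level warn or worse, capped at 400 lines.
    func kernelWarnings() (string, error) {
        script := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := kernelWarnings()
        fmt.Println(len(out), err)
    }
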
	I1216 03:43:15.104471 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:15.116055 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:15.116146 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:15.149135 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:15.149167 1995776 cri.go:89] found id: ""
	I1216 03:43:15.149176 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:15.149236 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:15.153443 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:15.153519 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:15.180187 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:15.180211 1995776 cri.go:89] found id: ""
	I1216 03:43:15.180220 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:15.180279 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:15.184360 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:15.184448 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:15.213595 1995776 cri.go:89] found id: ""
	I1216 03:43:15.213620 1995776 logs.go:282] 0 containers: []
	W1216 03:43:15.213629 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:15.213637 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:15.213699 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:15.240367 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:15.240392 1995776 cri.go:89] found id: ""
	I1216 03:43:15.240401 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:15.240462 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:15.244430 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:15.244509 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:15.270829 1995776 cri.go:89] found id: ""
	I1216 03:43:15.270856 1995776 logs.go:282] 0 containers: []
	W1216 03:43:15.270866 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:15.270872 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:15.270938 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:15.301737 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:15.301761 1995776 cri.go:89] found id: ""
	I1216 03:43:15.301769 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:15.301827 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:15.305655 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:15.305732 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:15.331934 1995776 cri.go:89] found id: ""
	I1216 03:43:15.331961 1995776 logs.go:282] 0 containers: []
	W1216 03:43:15.331971 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:15.331978 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:15.332044 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:15.360049 1995776 cri.go:89] found id: ""
	I1216 03:43:15.360079 1995776 logs.go:282] 0 containers: []
	W1216 03:43:15.360088 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:15.360102 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:15.360118 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:15.428211 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:15.428230 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:15.428245 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:15.460375 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:15.460409 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:15.493947 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:15.493979 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:15.528879 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:15.528915 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:15.559885 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:15.559928 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:15.603178 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:15.603210 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:15.619714 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:15.619745 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:15.653129 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:15.653163 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:18.215206 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:18.227070 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:18.227140 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:18.258823 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:18.258843 1995776 cri.go:89] found id: ""
	I1216 03:43:18.258851 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:18.258904 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:18.263238 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:18.263308 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:18.296159 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:18.296179 1995776 cri.go:89] found id: ""
	I1216 03:43:18.296188 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:18.296244 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:18.300727 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:18.300798 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:18.332949 1995776 cri.go:89] found id: ""
	I1216 03:43:18.332971 1995776 logs.go:282] 0 containers: []
	W1216 03:43:18.332979 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:18.332986 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:18.333052 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:18.372729 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:18.372749 1995776 cri.go:89] found id: ""
	I1216 03:43:18.372757 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:18.372814 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:18.377265 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:18.377332 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:18.411690 1995776 cri.go:89] found id: ""
	I1216 03:43:18.411715 1995776 logs.go:282] 0 containers: []
	W1216 03:43:18.411724 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:18.411730 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:18.411791 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:18.445795 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:18.445826 1995776 cri.go:89] found id: ""
	I1216 03:43:18.445835 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:18.445912 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:18.450212 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:18.450292 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:18.485630 1995776 cri.go:89] found id: ""
	I1216 03:43:18.485658 1995776 logs.go:282] 0 containers: []
	W1216 03:43:18.485667 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:18.485673 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:18.485733 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:18.514600 1995776 cri.go:89] found id: ""
	I1216 03:43:18.514626 1995776 logs.go:282] 0 containers: []
	W1216 03:43:18.514635 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:18.514650 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:18.514661 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:18.546752 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:18.546783 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:18.623320 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:18.623346 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:18.623359 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:18.655128 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:18.655163 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:18.686389 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:18.686476 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:18.753602 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:18.753685 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:18.770447 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:18.770527 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:18.805807 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:18.805847 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:18.837842 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:18.837878 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:21.373560 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:21.383719 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:21.383791 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:21.419229 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:21.419254 1995776 cri.go:89] found id: ""
	I1216 03:43:21.419263 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:21.419328 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:21.423670 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:21.423743 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:21.456661 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:21.456682 1995776 cri.go:89] found id: ""
	I1216 03:43:21.456691 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:21.456749 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:21.460640 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:21.460711 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:21.493822 1995776 cri.go:89] found id: ""
	I1216 03:43:21.493851 1995776 logs.go:282] 0 containers: []
	W1216 03:43:21.493860 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:21.493867 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:21.493931 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:21.527415 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:21.527436 1995776 cri.go:89] found id: ""
	I1216 03:43:21.527445 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:21.527503 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:21.531638 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:21.531710 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:21.565746 1995776 cri.go:89] found id: ""
	I1216 03:43:21.565771 1995776 logs.go:282] 0 containers: []
	W1216 03:43:21.565780 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:21.565786 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:21.565845 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:21.600014 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:21.600037 1995776 cri.go:89] found id: ""
	I1216 03:43:21.600046 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:21.600104 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:21.604700 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:21.604784 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:21.647471 1995776 cri.go:89] found id: ""
	I1216 03:43:21.647499 1995776 logs.go:282] 0 containers: []
	W1216 03:43:21.647508 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:21.647515 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:21.647583 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:21.684673 1995776 cri.go:89] found id: ""
	I1216 03:43:21.684699 1995776 logs.go:282] 0 containers: []
	W1216 03:43:21.684708 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:21.684722 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:21.684733 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:21.784544 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:21.784577 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:21.805812 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:21.805845 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:21.845146 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:21.845180 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:21.886529 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:21.886566 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:21.926410 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:21.926439 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:22.024839 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:22.024862 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:22.024876 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:22.076250 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:22.076287 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:22.110290 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:22.110323 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
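
	The `sudo pgrep -xnf kube-apiserver.*minikube.*` line that opens each cycle, roughly three seconds apart, is a wait loop: keep checking for a running apiserver process and re-collect diagnostics until it appears or a deadline expires. A standalone sketch of that pattern (the two-minute deadline is an assumption for illustration, not the test's actual timeout):

	    // poll_apiserver.go: repeat the process check seen at the top of every
	    // cycle until it succeeds or a deadline passes.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(2 * time.Minute) // assumed deadline
	        for time.Now().Before(deadline) {
	            // The same check the log runs before each gathering pass.
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                fmt.Println("kube-apiserver process found")
	                return
	            }
	            time.Sleep(3 * time.Second) // matches the ~3s gap between cycles
	        }
	        fmt.Println("timed out waiting for kube-apiserver")
	    }
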
	I1216 03:43:24.644955 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:24.656498 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:24.656568 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:24.698423 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:24.698443 1995776 cri.go:89] found id: ""
	I1216 03:43:24.698451 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:24.698511 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:24.702648 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:24.702717 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:24.779330 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:24.779350 1995776 cri.go:89] found id: ""
	I1216 03:43:24.779358 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:24.779411 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:24.783667 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:24.783736 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:24.811439 1995776 cri.go:89] found id: ""
	I1216 03:43:24.811461 1995776 logs.go:282] 0 containers: []
	W1216 03:43:24.811469 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:24.811475 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:24.811537 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:24.850680 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:24.850753 1995776 cri.go:89] found id: ""
	I1216 03:43:24.850776 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:24.850846 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:24.854945 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:24.855011 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:24.885254 1995776 cri.go:89] found id: ""
	I1216 03:43:24.885328 1995776 logs.go:282] 0 containers: []
	W1216 03:43:24.885352 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:24.885370 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:24.885452 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:24.921447 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:24.921524 1995776 cri.go:89] found id: ""
	I1216 03:43:24.921547 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:24.921629 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:24.925929 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:24.926266 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:24.958441 1995776 cri.go:89] found id: ""
	I1216 03:43:24.958508 1995776 logs.go:282] 0 containers: []
	W1216 03:43:24.958533 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:24.958562 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:24.958638 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:24.991789 1995776 cri.go:89] found id: ""
	I1216 03:43:24.991864 1995776 logs.go:282] 0 containers: []
	W1216 03:43:24.991888 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:24.991945 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:24.991979 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:25.046506 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:25.046586 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:25.091874 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:25.091915 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:25.139266 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:25.139302 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:25.187663 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:25.187704 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:25.227244 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:25.227274 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:25.247516 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:25.247543 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:25.281726 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:25.281757 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:25.352049 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:25.352084 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:25.455758 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
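
	Each cri.go lookup above is the same two-step query: `crictl ps -a --quiet --name=<component>` prints one container ID per line, and an empty result produces the "0 containers" / "No container was found" warning. A rough equivalent of that step, assuming crictl is installed on the node:

	    // list_containers.go: resolve container IDs for the control-plane
	    // components the log checks, the way the cri.go lookups do.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func containerIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        var ids []string
	        for _, line := range strings.Split(string(out), "\n") {
	            line = strings.TrimSpace(line)
	            if line != "" {
	                ids = append(ids, line)
	            }
	        }
	        return ids, nil
	    }

	    func main() {
	        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
	            ids, err := containerIDs(name)
	            if err != nil {
	                fmt.Println(name, "lookup failed:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	        }
	    }
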
	I1216 03:43:27.956682 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:27.966883 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:27.966952 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:27.994539 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:27.994559 1995776 cri.go:89] found id: ""
	I1216 03:43:27.994568 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:27.994628 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:27.998491 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:27.998563 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:28.027120 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:28.027147 1995776 cri.go:89] found id: ""
	I1216 03:43:28.027156 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:28.027220 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:28.031253 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:28.031413 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:28.062731 1995776 cri.go:89] found id: ""
	I1216 03:43:28.062754 1995776 logs.go:282] 0 containers: []
	W1216 03:43:28.062762 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:28.062769 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:28.062832 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:28.088865 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:28.088935 1995776 cri.go:89] found id: ""
	I1216 03:43:28.088950 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:28.089011 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:28.092819 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:28.092896 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:28.119195 1995776 cri.go:89] found id: ""
	I1216 03:43:28.119223 1995776 logs.go:282] 0 containers: []
	W1216 03:43:28.119232 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:28.119238 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:28.119300 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:28.145973 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:28.146005 1995776 cri.go:89] found id: ""
	I1216 03:43:28.146013 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:28.146074 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:28.150281 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:28.150362 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:28.175414 1995776 cri.go:89] found id: ""
	I1216 03:43:28.175441 1995776 logs.go:282] 0 containers: []
	W1216 03:43:28.175450 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:28.175457 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:28.175518 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:28.201674 1995776 cri.go:89] found id: ""
	I1216 03:43:28.201698 1995776 logs.go:282] 0 containers: []
	W1216 03:43:28.201706 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:28.201720 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:28.201732 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:28.281678 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:28.281698 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:28.281710 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:28.318049 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:28.318398 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:28.370311 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:28.370385 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:28.412373 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:28.412448 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:28.453732 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:28.453820 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:28.553272 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:28.553350 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:28.597731 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:28.597837 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:28.669435 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:28.669472 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
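
	The "container status" step uses a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl (by absolute path when `which` finds it) and fall back to docker when crictl is absent or fails. The same idea without the shell, as a sketch:

	    // container_status.go: prefer crictl for the all-containers listing and
	    // fall back to docker, mirroring the "|| sudo docker ps -a" in the log.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	        if err != nil {
	            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	        }
	        if err != nil {
	            fmt.Println("both crictl and docker failed:", err)
	            return
	        }
	        fmt.Printf("%s", out)
	    }
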
	I1216 03:43:31.186374 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:31.196956 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:31.197040 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:31.225694 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:31.225716 1995776 cri.go:89] found id: ""
	I1216 03:43:31.225725 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:31.225783 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:31.229508 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:31.229583 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:31.256430 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:31.256456 1995776 cri.go:89] found id: ""
	I1216 03:43:31.256465 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:31.256524 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:31.261011 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:31.261084 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:31.289040 1995776 cri.go:89] found id: ""
	I1216 03:43:31.289067 1995776 logs.go:282] 0 containers: []
	W1216 03:43:31.289077 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:31.289083 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:31.289142 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:31.317133 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:31.317156 1995776 cri.go:89] found id: ""
	I1216 03:43:31.317165 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:31.317225 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:31.320975 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:31.321056 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:31.350050 1995776 cri.go:89] found id: ""
	I1216 03:43:31.350081 1995776 logs.go:282] 0 containers: []
	W1216 03:43:31.350097 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:31.350104 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:31.350165 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:31.376447 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:31.376468 1995776 cri.go:89] found id: ""
	I1216 03:43:31.376477 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:31.376554 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:31.380401 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:31.380477 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:31.407632 1995776 cri.go:89] found id: ""
	I1216 03:43:31.407670 1995776 logs.go:282] 0 containers: []
	W1216 03:43:31.407679 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:31.407686 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:31.407756 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:31.432208 1995776 cri.go:89] found id: ""
	I1216 03:43:31.432243 1995776 logs.go:282] 0 containers: []
	W1216 03:43:31.432253 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:31.432267 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:31.432278 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:31.499797 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:31.499836 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:31.537345 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:31.537377 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:31.571188 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:31.571223 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:31.604521 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:31.604551 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:31.635149 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:31.635183 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:31.663752 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:31.663782 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:31.679868 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:31.679902 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:31.744545 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:31.744567 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:31.744581 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:34.280278 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:34.291243 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:34.291315 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:34.326886 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:34.326910 1995776 cri.go:89] found id: ""
	I1216 03:43:34.326919 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:34.326972 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:34.331490 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:34.331560 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:34.374600 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:34.374624 1995776 cri.go:89] found id: ""
	I1216 03:43:34.374632 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:34.374685 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:34.382779 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:34.382859 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:34.420810 1995776 cri.go:89] found id: ""
	I1216 03:43:34.420838 1995776 logs.go:282] 0 containers: []
	W1216 03:43:34.420846 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:34.420852 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:34.420918 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:34.484835 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:34.484857 1995776 cri.go:89] found id: ""
	I1216 03:43:34.484866 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:34.484927 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:34.497598 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:34.497690 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:34.557182 1995776 cri.go:89] found id: ""
	I1216 03:43:34.557218 1995776 logs.go:282] 0 containers: []
	W1216 03:43:34.557227 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:34.557233 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:34.557305 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:34.601892 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:34.601914 1995776 cri.go:89] found id: ""
	I1216 03:43:34.601923 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:34.601986 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:34.606785 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:34.606877 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:34.653656 1995776 cri.go:89] found id: ""
	I1216 03:43:34.653682 1995776 logs.go:282] 0 containers: []
	W1216 03:43:34.653691 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:34.653698 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:34.653762 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:34.683341 1995776 cri.go:89] found id: ""
	I1216 03:43:34.683367 1995776 logs.go:282] 0 containers: []
	W1216 03:43:34.683376 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:34.683393 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:34.683414 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:34.790848 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:34.790882 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:34.790896 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:34.853976 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:34.854008 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:34.900744 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:34.900775 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:34.966934 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:34.966976 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:35.019730 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:35.019810 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:35.107346 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:35.107426 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:35.127503 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:35.127533 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:35.162807 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:35.162867 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
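
	Apart from "describe nodes", every source in the gathering loop resolves to one of two commands: `journalctl -u <unit> -n 400` for host services and `crictl logs --tail 400 <id>` for containers, plus a filtered dmesg. A condensed sketch of that collection pass (the container ID below is abbreviated from the log and stands in for whatever the crictl lookup returned):

	    // gather_logs.go: collect the same diagnostic sources the log cycles
	    // through, 400 lines from each.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func run(cmdline string) {
	        out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	        fmt.Printf(">>> %s (err=%v)\n%s\n", cmdline, err, out)
	    }

	    func main() {
	        run("sudo journalctl -u kubelet -n 400")
	        run("sudo journalctl -u containerd -n 400")
	        run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	        // Container logs; the ID would come from a crictl lookup as above.
	        run("sudo crictl logs --tail 400 4ed4ce6f07e1") // abbreviated apiserver ID from the log
	    }
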
	I1216 03:43:37.741173 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:37.751189 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:37.751282 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:37.776168 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:37.776188 1995776 cri.go:89] found id: ""
	I1216 03:43:37.776197 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:37.776253 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:37.780212 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:37.780328 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:37.804976 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:37.805003 1995776 cri.go:89] found id: ""
	I1216 03:43:37.805011 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:37.805090 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:37.809092 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:37.809166 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:37.837085 1995776 cri.go:89] found id: ""
	I1216 03:43:37.837161 1995776 logs.go:282] 0 containers: []
	W1216 03:43:37.837177 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:37.837184 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:37.837243 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:37.865465 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:37.865488 1995776 cri.go:89] found id: ""
	I1216 03:43:37.865497 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:37.865565 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:37.869240 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:37.869314 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:37.894817 1995776 cri.go:89] found id: ""
	I1216 03:43:37.894843 1995776 logs.go:282] 0 containers: []
	W1216 03:43:37.894851 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:37.894858 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:37.894918 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:37.921940 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:37.921965 1995776 cri.go:89] found id: ""
	I1216 03:43:37.921973 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:37.922036 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:37.925881 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:37.925958 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:37.952997 1995776 cri.go:89] found id: ""
	I1216 03:43:37.953067 1995776 logs.go:282] 0 containers: []
	W1216 03:43:37.953082 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:37.953089 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:37.953152 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:37.978934 1995776 cri.go:89] found id: ""
	I1216 03:43:37.978959 1995776 logs.go:282] 0 containers: []
	W1216 03:43:37.978968 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:37.978982 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:37.978993 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:38.037283 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:38.037320 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:38.054605 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:38.054680 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:38.089172 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:38.089213 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:38.121282 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:38.121315 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:38.149456 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:38.149487 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:38.235737 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:38.235760 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:38.235773 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:38.274499 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:38.274533 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:38.316211 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:38.316256 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:40.848091 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:40.858875 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:40.858948 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:40.888535 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:40.888562 1995776 cri.go:89] found id: ""
	I1216 03:43:40.888571 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:40.888627 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:40.893020 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:40.893095 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:40.925240 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:40.925267 1995776 cri.go:89] found id: ""
	I1216 03:43:40.925276 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:40.925330 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:40.929521 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:40.929596 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:40.960340 1995776 cri.go:89] found id: ""
	I1216 03:43:40.960369 1995776 logs.go:282] 0 containers: []
	W1216 03:43:40.960378 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:40.960384 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:40.960443 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:41.006994 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:41.007024 1995776 cri.go:89] found id: ""
	I1216 03:43:41.007033 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:41.007126 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:41.011936 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:41.012013 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:41.043162 1995776 cri.go:89] found id: ""
	I1216 03:43:41.043188 1995776 logs.go:282] 0 containers: []
	W1216 03:43:41.043201 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:41.043207 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:41.043271 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:41.073699 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:41.073723 1995776 cri.go:89] found id: ""
	I1216 03:43:41.073732 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:41.073792 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:41.077448 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:41.077523 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:41.109283 1995776 cri.go:89] found id: ""
	I1216 03:43:41.109309 1995776 logs.go:282] 0 containers: []
	W1216 03:43:41.109319 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:41.109325 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:41.109383 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:41.138708 1995776 cri.go:89] found id: ""
	I1216 03:43:41.138790 1995776 logs.go:282] 0 containers: []
	W1216 03:43:41.138814 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:41.138858 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:41.138889 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:41.169046 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:41.169083 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:41.236007 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:41.236400 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:41.316307 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:41.316384 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:41.316408 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:41.354574 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:41.354608 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:41.390641 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:41.390680 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:41.421432 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:41.421461 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:41.437673 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:41.437706 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:41.475181 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:41.475216 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:44.007538 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:44.018270 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:44.018350 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:44.044754 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:44.044778 1995776 cri.go:89] found id: ""
	I1216 03:43:44.044786 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:44.044846 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:44.048628 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:44.048743 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:44.077369 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:44.077393 1995776 cri.go:89] found id: ""
	I1216 03:43:44.077401 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:44.077458 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:44.081308 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:44.081395 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:44.108193 1995776 cri.go:89] found id: ""
	I1216 03:43:44.108220 1995776 logs.go:282] 0 containers: []
	W1216 03:43:44.108230 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:44.108236 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:44.108300 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:44.133586 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:44.133609 1995776 cri.go:89] found id: ""
	I1216 03:43:44.133635 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:44.133694 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:44.137550 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:44.137667 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:44.165256 1995776 cri.go:89] found id: ""
	I1216 03:43:44.165284 1995776 logs.go:282] 0 containers: []
	W1216 03:43:44.165294 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:44.165300 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:44.165387 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:44.194300 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:44.194371 1995776 cri.go:89] found id: ""
	I1216 03:43:44.194393 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:44.194481 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:44.198736 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:44.198835 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:44.226165 1995776 cri.go:89] found id: ""
	I1216 03:43:44.226191 1995776 logs.go:282] 0 containers: []
	W1216 03:43:44.226200 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:44.226206 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:44.226281 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:44.263321 1995776 cri.go:89] found id: ""
	I1216 03:43:44.263389 1995776 logs.go:282] 0 containers: []
	W1216 03:43:44.263414 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:44.263436 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:44.263449 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:44.295546 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:44.295578 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:44.324424 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:44.324453 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:44.383491 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:44.383529 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:44.418917 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:44.418956 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:44.458084 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:44.458114 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:44.494447 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:44.494483 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:44.525061 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:44.525111 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:44.541803 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:44.541835 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:44.611141 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:47.112085 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:47.122278 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:47.122349 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:47.148778 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:47.148803 1995776 cri.go:89] found id: ""
	I1216 03:43:47.148818 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:47.148875 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:47.152601 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:47.152674 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:47.182104 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:47.182129 1995776 cri.go:89] found id: ""
	I1216 03:43:47.182139 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:47.182201 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:47.188650 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:47.188722 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:47.228895 1995776 cri.go:89] found id: ""
	I1216 03:43:47.228926 1995776 logs.go:282] 0 containers: []
	W1216 03:43:47.228946 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:47.228953 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:47.229026 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:47.253996 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:47.254021 1995776 cri.go:89] found id: ""
	I1216 03:43:47.254030 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:47.254091 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:47.258875 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:47.258950 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:47.284784 1995776 cri.go:89] found id: ""
	I1216 03:43:47.284811 1995776 logs.go:282] 0 containers: []
	W1216 03:43:47.284820 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:47.284826 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:47.284889 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:47.316566 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:47.316590 1995776 cri.go:89] found id: ""
	I1216 03:43:47.316600 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:47.316677 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:47.320553 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:47.320640 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:47.346305 1995776 cri.go:89] found id: ""
	I1216 03:43:47.346333 1995776 logs.go:282] 0 containers: []
	W1216 03:43:47.346342 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:47.346348 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:47.346408 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:47.373919 1995776 cri.go:89] found id: ""
	I1216 03:43:47.373948 1995776 logs.go:282] 0 containers: []
	W1216 03:43:47.373958 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:47.373971 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:47.373987 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:47.394713 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:47.394744 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:47.429011 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:47.429044 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:47.466269 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:47.466307 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:47.497465 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:47.497503 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:47.526074 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:47.526103 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:47.585060 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:47.585099 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:47.650519 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:47.650541 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:47.650553 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:47.684036 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:47.684069 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:50.216020 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:50.230907 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:50.230982 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:50.261607 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:50.261632 1995776 cri.go:89] found id: ""
	I1216 03:43:50.261641 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:50.261698 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:50.265402 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:50.265479 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:50.293607 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:50.293684 1995776 cri.go:89] found id: ""
	I1216 03:43:50.293708 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:50.293790 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:50.297640 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:50.297716 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:50.323624 1995776 cri.go:89] found id: ""
	I1216 03:43:50.323651 1995776 logs.go:282] 0 containers: []
	W1216 03:43:50.323660 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:50.323666 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:50.323728 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:50.352500 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:50.352527 1995776 cri.go:89] found id: ""
	I1216 03:43:50.352537 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:50.352603 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:50.356371 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:50.356446 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:50.380621 1995776 cri.go:89] found id: ""
	I1216 03:43:50.380645 1995776 logs.go:282] 0 containers: []
	W1216 03:43:50.380653 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:50.380659 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:50.380716 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:50.404624 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:50.404648 1995776 cri.go:89] found id: ""
	I1216 03:43:50.404657 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:50.404718 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:50.408569 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:50.408642 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:50.433542 1995776 cri.go:89] found id: ""
	I1216 03:43:50.433569 1995776 logs.go:282] 0 containers: []
	W1216 03:43:50.433578 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:50.433584 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:50.433645 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:50.458856 1995776 cri.go:89] found id: ""
	I1216 03:43:50.458887 1995776 logs.go:282] 0 containers: []
	W1216 03:43:50.458896 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:50.458914 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:50.458925 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:50.516901 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:50.516942 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:50.552927 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:50.552961 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:50.584353 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:50.584385 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:50.614465 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:50.614500 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:50.630943 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:50.630972 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:50.705719 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:50.705752 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:50.705767 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:50.740519 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:50.740552 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:50.787325 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:50.787357 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:53.319887 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:53.330127 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:53.330200 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:53.355125 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:53.355150 1995776 cri.go:89] found id: ""
	I1216 03:43:53.355160 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:53.355216 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:53.358969 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:53.359065 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:53.385394 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:53.385425 1995776 cri.go:89] found id: ""
	I1216 03:43:53.385435 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:53.385498 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:53.389361 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:53.389433 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:53.413533 1995776 cri.go:89] found id: ""
	I1216 03:43:53.413562 1995776 logs.go:282] 0 containers: []
	W1216 03:43:53.413570 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:53.413577 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:53.413638 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:53.437698 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:53.437724 1995776 cri.go:89] found id: ""
	I1216 03:43:53.437732 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:53.437791 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:53.441550 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:53.441622 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:53.473661 1995776 cri.go:89] found id: ""
	I1216 03:43:53.473688 1995776 logs.go:282] 0 containers: []
	W1216 03:43:53.473698 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:53.473704 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:53.473764 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:53.499079 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:53.499107 1995776 cri.go:89] found id: ""
	I1216 03:43:53.499116 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:53.499186 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:53.502826 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:53.502903 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:53.527700 1995776 cri.go:89] found id: ""
	I1216 03:43:53.527728 1995776 logs.go:282] 0 containers: []
	W1216 03:43:53.527737 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:53.527743 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:53.527808 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:53.558180 1995776 cri.go:89] found id: ""
	I1216 03:43:53.558204 1995776 logs.go:282] 0 containers: []
	W1216 03:43:53.558212 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:53.558226 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:53.558240 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:53.616927 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:53.616966 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:53.634487 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:53.634571 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:53.705659 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:53.705683 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:53.705697 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:53.741620 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:53.741655 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:53.776985 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:53.777017 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:53.811951 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:53.811986 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:53.840624 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:53.840658 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:53.872923 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:53.872953 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:56.403282 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:56.416988 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:56.417078 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:56.458368 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:56.458393 1995776 cri.go:89] found id: ""
	I1216 03:43:56.458403 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:56.458463 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:56.462663 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:56.462735 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:56.500249 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:56.500269 1995776 cri.go:89] found id: ""
	I1216 03:43:56.500277 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:56.500332 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:56.504764 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:56.504834 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:56.541702 1995776 cri.go:89] found id: ""
	I1216 03:43:56.541744 1995776 logs.go:282] 0 containers: []
	W1216 03:43:56.541753 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:56.541759 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:56.541874 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:56.593238 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:56.593261 1995776 cri.go:89] found id: ""
	I1216 03:43:56.593272 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:56.593333 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:56.597628 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:56.597700 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:43:56.629746 1995776 cri.go:89] found id: ""
	I1216 03:43:56.629770 1995776 logs.go:282] 0 containers: []
	W1216 03:43:56.629782 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:43:56.629791 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:43:56.629851 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:43:56.663083 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:56.663102 1995776 cri.go:89] found id: ""
	I1216 03:43:56.663110 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:43:56.663167 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:56.667083 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:43:56.667155 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:43:56.719494 1995776 cri.go:89] found id: ""
	I1216 03:43:56.719518 1995776 logs.go:282] 0 containers: []
	W1216 03:43:56.719527 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:43:56.719533 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:43:56.719594 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:43:56.760515 1995776 cri.go:89] found id: ""
	I1216 03:43:56.760538 1995776 logs.go:282] 0 containers: []
	W1216 03:43:56.760546 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:43:56.760558 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:43:56.760570 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:56.807482 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:43:56.807560 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:56.840120 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:43:56.840203 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:43:56.905780 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:43:56.905862 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:43:56.923021 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:43:56.923216 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:43:57.001203 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:43:57.001295 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:43:57.086164 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:43:57.086188 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:43:57.152663 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:43:57.152703 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:43:57.233090 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:43:57.233112 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:43:57.233125 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:59.769547 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:43:59.783629 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:43:59.783723 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:43:59.820817 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:43:59.820836 1995776 cri.go:89] found id: ""
	I1216 03:43:59.820844 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:43:59.820900 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:59.825360 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:43:59.825431 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:43:59.860613 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:43:59.860632 1995776 cri.go:89] found id: ""
	I1216 03:43:59.860640 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:43:59.860696 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:59.864719 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:43:59.864836 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:43:59.896703 1995776 cri.go:89] found id: ""
	I1216 03:43:59.896773 1995776 logs.go:282] 0 containers: []
	W1216 03:43:59.896798 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:43:59.896818 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:43:59.896906 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:43:59.933120 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:43:59.933186 1995776 cri.go:89] found id: ""
	I1216 03:43:59.933210 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:43:59.933297 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:43:59.948000 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:43:59.948076 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:00.072061 1995776 cri.go:89] found id: ""
	I1216 03:44:00.072105 1995776 logs.go:282] 0 containers: []
	W1216 03:44:00.072117 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:00.072125 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:00.072212 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:00.186764 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:00.186786 1995776 cri.go:89] found id: ""
	I1216 03:44:00.186795 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:00.186872 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:00.192756 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:00.192936 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:00.246848 1995776 cri.go:89] found id: ""
	I1216 03:44:00.246939 1995776 logs.go:282] 0 containers: []
	W1216 03:44:00.246976 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:00.247020 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:00.247139 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:00.312248 1995776 cri.go:89] found id: ""
	I1216 03:44:00.312338 1995776 logs.go:282] 0 containers: []
	W1216 03:44:00.312364 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:00.312410 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:00.312444 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:00.417835 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:00.417886 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:00.467857 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:00.467965 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:00.570367 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:00.570493 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:00.570526 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:00.640300 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:00.640374 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:00.676268 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:00.676347 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:00.785363 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:00.785441 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:00.858935 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:00.858970 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:00.878455 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:00.878523 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:03.439029 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:03.449625 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:03.449697 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:03.479622 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:03.479645 1995776 cri.go:89] found id: ""
	I1216 03:44:03.479654 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:03.479709 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:03.483456 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:03.483538 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:03.509392 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:03.509417 1995776 cri.go:89] found id: ""
	I1216 03:44:03.509426 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:03.509483 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:03.513322 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:03.513395 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:03.543888 1995776 cri.go:89] found id: ""
	I1216 03:44:03.543923 1995776 logs.go:282] 0 containers: []
	W1216 03:44:03.543933 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:03.543942 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:03.544006 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:03.576564 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:03.576588 1995776 cri.go:89] found id: ""
	I1216 03:44:03.576596 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:03.576653 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:03.581047 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:03.581135 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:03.620610 1995776 cri.go:89] found id: ""
	I1216 03:44:03.620637 1995776 logs.go:282] 0 containers: []
	W1216 03:44:03.620646 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:03.620652 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:03.620713 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:03.660518 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:03.660545 1995776 cri.go:89] found id: ""
	I1216 03:44:03.660554 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:03.660612 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:03.664703 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:03.664780 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:03.701813 1995776 cri.go:89] found id: ""
	I1216 03:44:03.701840 1995776 logs.go:282] 0 containers: []
	W1216 03:44:03.701849 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:03.701855 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:03.701913 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:03.764646 1995776 cri.go:89] found id: ""
	I1216 03:44:03.764673 1995776 logs.go:282] 0 containers: []
	W1216 03:44:03.764682 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:03.764697 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:03.764709 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:03.797822 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:03.797852 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:03.851456 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:03.851488 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:03.903715 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:03.903811 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:03.983277 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:03.983370 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:04.027568 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:04.027655 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:04.088372 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:04.088455 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:04.155211 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:04.155289 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:04.283814 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:04.283898 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:04.283936 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:06.858343 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:06.868403 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:06.868475 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:06.893755 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:06.893778 1995776 cri.go:89] found id: ""
	I1216 03:44:06.893786 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:06.893843 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:06.897655 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:06.897730 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:06.925663 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:06.925687 1995776 cri.go:89] found id: ""
	I1216 03:44:06.925695 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:06.925754 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:06.929795 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:06.929871 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:06.955582 1995776 cri.go:89] found id: ""
	I1216 03:44:06.955607 1995776 logs.go:282] 0 containers: []
	W1216 03:44:06.955616 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:06.955622 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:06.955692 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:06.983950 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:06.983976 1995776 cri.go:89] found id: ""
	I1216 03:44:06.983984 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:06.984041 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:06.987730 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:06.987806 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:07.013711 1995776 cri.go:89] found id: ""
	I1216 03:44:07.013738 1995776 logs.go:282] 0 containers: []
	W1216 03:44:07.013747 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:07.013754 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:07.013814 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:07.054475 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:07.054499 1995776 cri.go:89] found id: ""
	I1216 03:44:07.054507 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:07.054571 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:07.058252 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:07.058326 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:07.086165 1995776 cri.go:89] found id: ""
	I1216 03:44:07.086191 1995776 logs.go:282] 0 containers: []
	W1216 03:44:07.086201 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:07.086207 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:07.086266 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:07.112772 1995776 cri.go:89] found id: ""
	I1216 03:44:07.112803 1995776 logs.go:282] 0 containers: []
	W1216 03:44:07.112813 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:07.112826 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:07.112838 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:07.142187 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:07.142216 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:07.200719 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:07.200800 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:07.218300 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:07.218338 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:07.312996 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:07.313080 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:07.313111 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:07.368572 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:07.368671 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:07.413960 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:07.414055 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:07.489946 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:07.490048 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:07.540246 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:07.540289 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:10.101963 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:10.112816 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:10.112891 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:10.138746 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:10.138771 1995776 cri.go:89] found id: ""
	I1216 03:44:10.138780 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:10.138840 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:10.142704 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:10.142775 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:10.168367 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:10.168387 1995776 cri.go:89] found id: ""
	I1216 03:44:10.168396 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:10.168455 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:10.172279 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:10.172350 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:10.198812 1995776 cri.go:89] found id: ""
	I1216 03:44:10.198835 1995776 logs.go:282] 0 containers: []
	W1216 03:44:10.198844 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:10.198852 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:10.198910 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:10.227609 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:10.227634 1995776 cri.go:89] found id: ""
	I1216 03:44:10.227643 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:10.227702 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:10.231558 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:10.231640 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:10.256832 1995776 cri.go:89] found id: ""
	I1216 03:44:10.256859 1995776 logs.go:282] 0 containers: []
	W1216 03:44:10.256869 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:10.256876 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:10.256948 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:10.283608 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:10.283632 1995776 cri.go:89] found id: ""
	I1216 03:44:10.283640 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:10.283696 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:10.287477 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:10.287555 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:10.312786 1995776 cri.go:89] found id: ""
	I1216 03:44:10.312869 1995776 logs.go:282] 0 containers: []
	W1216 03:44:10.312885 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:10.312893 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:10.312966 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:10.338833 1995776 cri.go:89] found id: ""
	I1216 03:44:10.338859 1995776 logs.go:282] 0 containers: []
	W1216 03:44:10.338868 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:10.338882 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:10.338894 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:10.370918 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:10.370951 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:10.409350 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:10.409382 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:10.447420 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:10.447456 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:10.481547 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:10.481584 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:10.520560 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:10.520594 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:10.580032 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:10.580113 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:10.644759 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:10.644833 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:10.644860 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:10.681801 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:10.681834 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
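The cycle above is minikube's diagnostic sweep: each control-plane component is located by running crictl ps -a --quiet --name=<component>, an empty result is logged as 'No container was found matching', and logs are then pulled only for the containers that were found. Below is a minimal Go sketch of that discovery step, assuming crictl is on PATH and passwordless sudo; listContainers is an illustrative helper name, not minikube's actual API.

	// list_containers.go - a sketch of the container-discovery step shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
	// the container IDs it prints, one per line (nil slice if none match).
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(component)
			if err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", component, len(ids), ids)
		}
	}

In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager ever return an ID; coredns, kube-proxy, kindnet, and storage-provisioner stay empty for the whole window.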
	I1216 03:44:13.199199 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:13.210385 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:13.210462 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:13.242096 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:13.242119 1995776 cri.go:89] found id: ""
	I1216 03:44:13.242128 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:13.242184 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:13.245810 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:13.245885 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:13.270645 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:13.270667 1995776 cri.go:89] found id: ""
	I1216 03:44:13.270676 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:13.270731 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:13.274208 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:13.274280 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:13.300816 1995776 cri.go:89] found id: ""
	I1216 03:44:13.300839 1995776 logs.go:282] 0 containers: []
	W1216 03:44:13.300848 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:13.300855 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:13.300918 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:13.325681 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:13.325699 1995776 cri.go:89] found id: ""
	I1216 03:44:13.325708 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:13.325761 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:13.329444 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:13.329520 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:13.354710 1995776 cri.go:89] found id: ""
	I1216 03:44:13.354782 1995776 logs.go:282] 0 containers: []
	W1216 03:44:13.354804 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:13.354825 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:13.354909 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:13.381104 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:13.381127 1995776 cri.go:89] found id: ""
	I1216 03:44:13.381135 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:13.381190 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:13.385219 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:13.385295 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:13.414879 1995776 cri.go:89] found id: ""
	I1216 03:44:13.414905 1995776 logs.go:282] 0 containers: []
	W1216 03:44:13.414914 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:13.414921 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:13.414982 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:13.444081 1995776 cri.go:89] found id: ""
	I1216 03:44:13.444110 1995776 logs.go:282] 0 containers: []
	W1216 03:44:13.444119 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:13.444135 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:13.444146 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:13.489609 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:13.489641 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:13.506481 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:13.506509 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:13.570608 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:13.570632 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:13.570646 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:13.618529 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:13.618558 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:13.655442 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:13.655470 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:13.693519 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:13.693557 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:13.730361 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:13.730397 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:13.762147 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:13.762182 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
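The 'container status' step in each sweep uses a shell fallback rather than a fixed binary path: prefer crictl wherever `which` finds it, and if that invocation fails, fall back to docker ps -a. A small Go sketch of the same idiom, with the command string copied verbatim from the log:

	// container_status.go - a sketch of the crictl-or-docker fallback one-liner.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `which crictl || echo crictl` keeps the command well-formed even when
		// crictl is missing from PATH; the trailing `|| sudo docker ps -a`
		// handles hosts where the Docker CLI is the only working runtime client.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker listings failed:", err)
		}
		fmt.Print(string(out))
	}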
	I1216 03:44:16.328384 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:16.338473 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:16.338539 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:16.363719 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:16.363739 1995776 cri.go:89] found id: ""
	I1216 03:44:16.363748 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:16.363803 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:16.367676 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:16.367751 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:16.392306 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:16.392328 1995776 cri.go:89] found id: ""
	I1216 03:44:16.392336 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:16.392390 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:16.396044 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:16.396136 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:16.420125 1995776 cri.go:89] found id: ""
	I1216 03:44:16.420147 1995776 logs.go:282] 0 containers: []
	W1216 03:44:16.420156 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:16.420162 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:16.420223 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:16.450749 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:16.450773 1995776 cri.go:89] found id: ""
	I1216 03:44:16.450783 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:16.450841 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:16.455782 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:16.455856 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:16.481545 1995776 cri.go:89] found id: ""
	I1216 03:44:16.481568 1995776 logs.go:282] 0 containers: []
	W1216 03:44:16.481577 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:16.481583 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:16.481642 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:16.510911 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:16.510939 1995776 cri.go:89] found id: ""
	I1216 03:44:16.510949 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:16.511004 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:16.514839 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:16.514913 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:16.540136 1995776 cri.go:89] found id: ""
	I1216 03:44:16.540163 1995776 logs.go:282] 0 containers: []
	W1216 03:44:16.540172 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:16.540179 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:16.540243 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:16.568360 1995776 cri.go:89] found id: ""
	I1216 03:44:16.568383 1995776 logs.go:282] 0 containers: []
	W1216 03:44:16.568392 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:16.568406 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:16.568418 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:16.602393 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:16.602428 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:16.634596 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:16.634627 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:16.667632 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:16.667665 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:16.738631 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:16.738667 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:16.738682 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:16.773896 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:16.773926 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:16.803552 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:16.803596 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:16.845095 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:16.845123 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:16.906558 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:16.906598 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:19.424210 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:19.436295 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:19.436362 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:19.468295 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:19.468316 1995776 cri.go:89] found id: ""
	I1216 03:44:19.468324 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:19.468381 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:19.472364 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:19.472439 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:19.505494 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:19.505565 1995776 cri.go:89] found id: ""
	I1216 03:44:19.505588 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:19.505678 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:19.510444 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:19.510514 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:19.536864 1995776 cri.go:89] found id: ""
	I1216 03:44:19.536890 1995776 logs.go:282] 0 containers: []
	W1216 03:44:19.536898 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:19.536904 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:19.536966 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:19.562374 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:19.562399 1995776 cri.go:89] found id: ""
	I1216 03:44:19.562408 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:19.562494 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:19.566113 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:19.566187 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:19.591097 1995776 cri.go:89] found id: ""
	I1216 03:44:19.591125 1995776 logs.go:282] 0 containers: []
	W1216 03:44:19.591146 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:19.591153 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:19.591220 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:19.616077 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:19.616142 1995776 cri.go:89] found id: ""
	I1216 03:44:19.616168 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:19.616253 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:19.619813 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:19.619892 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:19.646651 1995776 cri.go:89] found id: ""
	I1216 03:44:19.646678 1995776 logs.go:282] 0 containers: []
	W1216 03:44:19.646686 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:19.646692 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:19.646752 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:19.672614 1995776 cri.go:89] found id: ""
	I1216 03:44:19.672641 1995776 logs.go:282] 0 containers: []
	W1216 03:44:19.672650 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:19.672666 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:19.672678 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:19.702768 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:19.702810 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:19.732541 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:19.732572 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:19.772596 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:19.772629 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:19.832393 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:19.832436 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:19.849208 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:19.849240 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:19.917049 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:19.917127 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:19.917155 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:19.952251 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:19.952281 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:19.985586 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:19.985618 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:22.533256 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:22.543627 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:22.543707 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:22.569321 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:22.569343 1995776 cri.go:89] found id: ""
	I1216 03:44:22.569353 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:22.569414 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:22.573139 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:22.573220 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:22.602629 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:22.602653 1995776 cri.go:89] found id: ""
	I1216 03:44:22.602661 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:22.602719 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:22.606442 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:22.606518 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:22.633747 1995776 cri.go:89] found id: ""
	I1216 03:44:22.633771 1995776 logs.go:282] 0 containers: []
	W1216 03:44:22.633780 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:22.633786 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:22.633850 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:22.664670 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:22.664691 1995776 cri.go:89] found id: ""
	I1216 03:44:22.664699 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:22.664757 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:22.668491 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:22.668566 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:22.694074 1995776 cri.go:89] found id: ""
	I1216 03:44:22.694099 1995776 logs.go:282] 0 containers: []
	W1216 03:44:22.694109 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:22.694116 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:22.694235 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:22.720736 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:22.720755 1995776 cri.go:89] found id: ""
	I1216 03:44:22.720764 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:22.720823 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:22.724655 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:22.724735 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:22.750601 1995776 cri.go:89] found id: ""
	I1216 03:44:22.750691 1995776 logs.go:282] 0 containers: []
	W1216 03:44:22.750714 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:22.750732 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:22.750824 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:22.777321 1995776 cri.go:89] found id: ""
	I1216 03:44:22.777349 1995776 logs.go:282] 0 containers: []
	W1216 03:44:22.777358 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:22.777374 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:22.777386 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:22.809955 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:22.809994 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:22.840261 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:22.840293 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:22.900282 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:22.900323 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:22.916528 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:22.916562 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:22.951932 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:22.951967 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:22.985984 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:22.986017 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:23.031449 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:23.031493 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:23.061547 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:23.061578 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:23.135380 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
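Every 'describe nodes' attempt in this window fails the same way: nothing is accepting connections on localhost:8443, so kubectl reports 'connection refused' before any API request is made. A plain TCP dial reproduces the condition without involving kubectl; the address is taken from the error above, and the timeout is an illustrative assumption.

	// probe_apiserver.go - a sketch distinguishing "port refused" from other failures.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// With no apiserver bound to the port this prints a
			// "connect: connection refused" error, matching the log.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}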
	I1216 03:44:25.636128 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:25.647993 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:25.648068 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:25.673245 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:25.673268 1995776 cri.go:89] found id: ""
	I1216 03:44:25.673277 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:25.673332 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:25.677179 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:25.677255 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:25.706432 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:25.706455 1995776 cri.go:89] found id: ""
	I1216 03:44:25.706469 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:25.706526 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:25.710323 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:25.710399 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:25.735711 1995776 cri.go:89] found id: ""
	I1216 03:44:25.735737 1995776 logs.go:282] 0 containers: []
	W1216 03:44:25.735745 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:25.735763 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:25.735824 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:25.760965 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:25.760990 1995776 cri.go:89] found id: ""
	I1216 03:44:25.760999 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:25.761060 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:25.764953 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:25.765033 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:25.789820 1995776 cri.go:89] found id: ""
	I1216 03:44:25.789846 1995776 logs.go:282] 0 containers: []
	W1216 03:44:25.789856 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:25.789862 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:25.789923 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:25.815037 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:25.815099 1995776 cri.go:89] found id: ""
	I1216 03:44:25.815108 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:25.815162 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:25.818751 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:25.818825 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:25.843335 1995776 cri.go:89] found id: ""
	I1216 03:44:25.843358 1995776 logs.go:282] 0 containers: []
	W1216 03:44:25.843366 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:25.843372 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:25.843430 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:25.868098 1995776 cri.go:89] found id: ""
	I1216 03:44:25.868121 1995776 logs.go:282] 0 containers: []
	W1216 03:44:25.868129 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:25.868142 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:25.868153 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:25.926055 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:25.926091 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:25.964763 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:25.964798 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:25.994290 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:25.994321 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:26.016336 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:26.016370 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:26.085545 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:26.085565 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:26.085579 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:26.122412 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:26.122443 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:26.156714 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:26.156748 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:26.189583 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:26.189617 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:28.731200 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:28.748433 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:28.748516 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:28.776203 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:28.776227 1995776 cri.go:89] found id: ""
	I1216 03:44:28.776236 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:28.776292 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:28.780328 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:28.780398 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:28.811330 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:28.811353 1995776 cri.go:89] found id: ""
	I1216 03:44:28.811400 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:28.811467 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:28.815852 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:28.815972 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:28.850289 1995776 cri.go:89] found id: ""
	I1216 03:44:28.850316 1995776 logs.go:282] 0 containers: []
	W1216 03:44:28.850325 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:28.850331 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:28.850394 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:28.879417 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:28.879439 1995776 cri.go:89] found id: ""
	I1216 03:44:28.879448 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:28.879504 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:28.883331 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:28.883411 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:28.908244 1995776 cri.go:89] found id: ""
	I1216 03:44:28.908321 1995776 logs.go:282] 0 containers: []
	W1216 03:44:28.908344 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:28.908363 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:28.908432 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:28.934313 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:28.934339 1995776 cri.go:89] found id: ""
	I1216 03:44:28.934348 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:28.934405 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:28.938015 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:28.938092 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:28.963853 1995776 cri.go:89] found id: ""
	I1216 03:44:28.963942 1995776 logs.go:282] 0 containers: []
	W1216 03:44:28.963967 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:28.963986 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:28.964089 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:28.993020 1995776 cri.go:89] found id: ""
	I1216 03:44:28.993096 1995776 logs.go:282] 0 containers: []
	W1216 03:44:28.993120 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:28.993148 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:28.993164 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:29.026167 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:29.026200 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:29.060728 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:29.060764 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:29.090211 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:29.090247 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:29.121566 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:29.121594 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:29.137864 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:29.137894 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:29.183923 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:29.183999 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:29.256961 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:29.257047 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:29.327272 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:29.327293 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:29.327306 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:31.861896 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:31.872709 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:31.872798 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:31.913037 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:31.913070 1995776 cri.go:89] found id: ""
	I1216 03:44:31.913080 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:31.913155 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:31.917887 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:31.917986 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:31.951461 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:31.951506 1995776 cri.go:89] found id: ""
	I1216 03:44:31.951518 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:31.951595 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:31.956145 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:31.956235 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:32.002979 1995776 cri.go:89] found id: ""
	I1216 03:44:32.003022 1995776 logs.go:282] 0 containers: []
	W1216 03:44:32.003032 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:32.003074 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:32.003163 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:32.047004 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:32.047096 1995776 cri.go:89] found id: ""
	I1216 03:44:32.047108 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:32.047220 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:32.054983 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:32.055100 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:32.094008 1995776 cri.go:89] found id: ""
	I1216 03:44:32.094050 1995776 logs.go:282] 0 containers: []
	W1216 03:44:32.094062 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:32.094081 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:32.094167 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:32.129733 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:32.129758 1995776 cri.go:89] found id: ""
	I1216 03:44:32.129768 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:32.129838 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:32.134483 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:32.134573 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:32.169911 1995776 cri.go:89] found id: ""
	I1216 03:44:32.169952 1995776 logs.go:282] 0 containers: []
	W1216 03:44:32.169962 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:32.169968 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:32.170040 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:32.239962 1995776 cri.go:89] found id: ""
	I1216 03:44:32.239989 1995776 logs.go:282] 0 containers: []
	W1216 03:44:32.239998 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:32.240012 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:32.240023 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:32.325576 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:32.325618 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:32.352801 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:32.352833 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:32.442668 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:32.442692 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:32.442705 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:32.482245 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:32.482279 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:32.524346 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:32.524394 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:32.562911 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:32.562942 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:32.591344 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:32.591376 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:32.625260 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:32.625292 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
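The timestamps show the whole sweep repeating roughly every three seconds: pgrep checks for a live kube-apiserver process, and each miss triggers another round of log gathering. A minimal sketch of that poll loop, with the pgrep pattern copied from the log; the attempt limit and sleep interval are assumptions for illustration, not minikube's configured values.

	// wait_apiserver.go - a sketch of the ~3s health-poll loop visible above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func apiserverRunning() bool {
		// pgrep exits non-zero when no process matches, surfacing as err != nil.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Printf("attempt %d: kube-apiserver not running, gathering diagnostics...\n", attempt)
			// In the real log this is where kubelet/containerd/dmesg output is pulled.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}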
	I1216 03:44:35.158074 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:35.170001 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:35.170086 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:35.211160 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:35.211181 1995776 cri.go:89] found id: ""
	I1216 03:44:35.211190 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:35.211252 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:35.215154 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:35.215230 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:35.250464 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:35.250484 1995776 cri.go:89] found id: ""
	I1216 03:44:35.250493 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:35.250552 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:35.257619 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:35.257689 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:35.285682 1995776 cri.go:89] found id: ""
	I1216 03:44:35.285705 1995776 logs.go:282] 0 containers: []
	W1216 03:44:35.285714 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:35.285720 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:35.285778 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:35.318718 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:35.318732 1995776 cri.go:89] found id: ""
	I1216 03:44:35.318739 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:35.318783 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:35.324471 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:35.324543 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:35.360735 1995776 cri.go:89] found id: ""
	I1216 03:44:35.360759 1995776 logs.go:282] 0 containers: []
	W1216 03:44:35.360767 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:35.360773 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:35.360836 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:35.391625 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:35.391702 1995776 cri.go:89] found id: ""
	I1216 03:44:35.391726 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:35.391825 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:35.396933 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:35.397095 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:35.426727 1995776 cri.go:89] found id: ""
	I1216 03:44:35.426811 1995776 logs.go:282] 0 containers: []
	W1216 03:44:35.426843 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:35.426863 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:35.426975 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:35.479069 1995776 cri.go:89] found id: ""
	I1216 03:44:35.479092 1995776 logs.go:282] 0 containers: []
	W1216 03:44:35.479100 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:35.479115 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:35.479126 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:35.574706 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:35.574788 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:35.674722 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
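Each pass also runs kubectl describe nodes against the node-local kubeconfig, and each attempt fails identically: the kubeconfig targets localhost:8443, nothing is listening there, so the kube-apiserver container exists but its process never serves. The endpoint can be probed directly from inside the node using the same paths this log uses; treat this as a hand check, not part of the test:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
    # or without kubectl; /healthz is normally readable anonymously
    curl -k https://localhost:8443/healthz

Either prints ok once the apiserver is actually serving; connection refused means the process exited or never bound the port.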
	I1216 03:44:35.674740 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:35.674753 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:35.725025 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:35.725096 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:35.777658 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:35.777734 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:35.822545 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:35.822576 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:35.855796 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:35.855832 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:35.874579 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:35.874610 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:35.936914 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:35.936978 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:38.477052 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:38.492193 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:38.492264 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:38.547916 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:38.547937 1995776 cri.go:89] found id: ""
	I1216 03:44:38.547946 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:38.548009 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:38.552239 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:38.552314 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:38.587382 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:38.587445 1995776 cri.go:89] found id: ""
	I1216 03:44:38.587468 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:38.587538 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:38.591725 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:38.591837 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:38.618671 1995776 cri.go:89] found id: ""
	I1216 03:44:38.618749 1995776 logs.go:282] 0 containers: []
	W1216 03:44:38.618774 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:38.618793 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:38.618871 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:38.648050 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:38.648114 1995776 cri.go:89] found id: ""
	I1216 03:44:38.648137 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:38.648208 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:38.652208 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:38.652321 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:38.697682 1995776 cri.go:89] found id: ""
	I1216 03:44:38.697748 1995776 logs.go:282] 0 containers: []
	W1216 03:44:38.697772 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:38.697791 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:38.697865 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:38.725301 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:38.725374 1995776 cri.go:89] found id: ""
	I1216 03:44:38.725397 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:38.725466 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:38.729630 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:38.729751 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:38.758804 1995776 cri.go:89] found id: ""
	I1216 03:44:38.758885 1995776 logs.go:282] 0 containers: []
	W1216 03:44:38.758909 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:38.758927 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:38.759019 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:38.789516 1995776 cri.go:89] found id: ""
	I1216 03:44:38.789592 1995776 logs.go:282] 0 containers: []
	W1216 03:44:38.789615 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:38.789641 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:38.789665 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:38.854390 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:38.854470 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:38.877934 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:38.878007 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:38.930054 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:38.930128 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:38.970206 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:38.970361 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:39.003277 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:39.003393 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:39.044094 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:39.044124 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:39.124845 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:39.124870 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:39.124883 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:39.180171 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:39.180205 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:41.735893 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:41.751938 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:41.752025 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:41.794946 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:41.794970 1995776 cri.go:89] found id: ""
	I1216 03:44:41.794979 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:41.795074 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:41.801247 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:41.801351 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:41.841888 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:41.841931 1995776 cri.go:89] found id: ""
	I1216 03:44:41.841947 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:41.842033 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:41.848151 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:41.848262 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:41.885900 1995776 cri.go:89] found id: ""
	I1216 03:44:41.885923 1995776 logs.go:282] 0 containers: []
	W1216 03:44:41.885944 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:41.885951 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:41.886027 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:41.933740 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:41.933828 1995776 cri.go:89] found id: ""
	I1216 03:44:41.933851 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:41.933956 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:41.938896 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:41.939088 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:41.978714 1995776 cri.go:89] found id: ""
	I1216 03:44:41.978801 1995776 logs.go:282] 0 containers: []
	W1216 03:44:41.978830 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:41.978878 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:41.979000 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:42.025986 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:42.026061 1995776 cri.go:89] found id: ""
	I1216 03:44:42.026084 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:42.026178 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:42.031114 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:42.031255 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:42.058440 1995776 cri.go:89] found id: ""
	I1216 03:44:42.058518 1995776 logs.go:282] 0 containers: []
	W1216 03:44:42.058543 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:42.058564 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:42.058701 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:42.097983 1995776 cri.go:89] found id: ""
	I1216 03:44:42.098071 1995776 logs.go:282] 0 containers: []
	W1216 03:44:42.098098 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:42.098154 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:42.098200 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:42.150187 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:42.150272 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:42.194661 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:42.194778 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:42.303391 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:42.303481 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:42.366017 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:42.366054 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:42.383180 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:42.383212 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:42.451029 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:42.451090 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:42.451106 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:42.484710 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:42.484742 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:42.523140 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:42.523173 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
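Each "Gathering logs for <component> [<id>] ..." pair resolves a component name to its container ID and then tails that container through crictl. The same command works interactively after ssh-ing into the node; the ID below is the kube-apiserver container this log found, and a shorter tail than minikube's 400 lines is usually enough to spot a crash loop:

    sudo crictl logs --tail 50 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405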
	I1216 03:44:45.056482 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:45.071344 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:45.071424 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:45.120118 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:45.120142 1995776 cri.go:89] found id: ""
	I1216 03:44:45.120152 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:45.120222 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:45.125973 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:45.126062 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:45.174299 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:45.174321 1995776 cri.go:89] found id: ""
	I1216 03:44:45.174331 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:45.174399 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:45.179954 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:45.180047 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:45.228813 1995776 cri.go:89] found id: ""
	I1216 03:44:45.229005 1995776 logs.go:282] 0 containers: []
	W1216 03:44:45.229041 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:45.229065 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:45.229173 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:45.276630 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:45.276727 1995776 cri.go:89] found id: ""
	I1216 03:44:45.276763 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:45.276871 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:45.284688 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:45.284837 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:45.321548 1995776 cri.go:89] found id: ""
	I1216 03:44:45.321578 1995776 logs.go:282] 0 containers: []
	W1216 03:44:45.321588 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:45.321595 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:45.321662 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:45.351990 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:45.352016 1995776 cri.go:89] found id: ""
	I1216 03:44:45.352026 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:45.352086 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:45.356218 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:45.356297 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:45.384103 1995776 cri.go:89] found id: ""
	I1216 03:44:45.384176 1995776 logs.go:282] 0 containers: []
	W1216 03:44:45.384199 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:45.384211 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:45.384287 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:45.416307 1995776 cri.go:89] found id: ""
	I1216 03:44:45.416332 1995776 logs.go:282] 0 containers: []
	W1216 03:44:45.416340 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:45.416357 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:45.416369 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:45.433487 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:45.433576 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:45.474193 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:45.474225 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:45.507349 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:45.507385 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:45.544120 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:45.544152 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:45.612084 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:45.612157 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:45.612185 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:45.649842 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:45.649878 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:45.692484 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:45.692518 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:45.723590 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:45.723628 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
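Note which components are never found: coredns, kube-proxy, kindnet, and storage-provisioner are all created through the API (addon manifests, a DaemonSet, Deployments), so they cannot exist until the apiserver answers; only the four kubelet-launched static pods ever appear. The kubelet journal usually says why the apiserver is unreachable; a quick look, assuming systemd inside the node as in this image:

    sudo journalctl -u kubelet --no-pager -n 50
    # while the apiserver is down this is typically full of connection-refused errors against https://localhost:8443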
	I1216 03:44:48.293940 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:48.305911 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:48.305995 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:48.333209 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:48.333232 1995776 cri.go:89] found id: ""
	I1216 03:44:48.333242 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:48.333301 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:48.337163 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:48.337239 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:48.362936 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:48.363017 1995776 cri.go:89] found id: ""
	I1216 03:44:48.363082 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:48.363174 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:48.366916 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:48.367029 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:48.393591 1995776 cri.go:89] found id: ""
	I1216 03:44:48.393620 1995776 logs.go:282] 0 containers: []
	W1216 03:44:48.393629 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:48.393635 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:48.393693 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:48.420138 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:48.420161 1995776 cri.go:89] found id: ""
	I1216 03:44:48.420169 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:48.420227 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:48.424291 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:48.424371 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:48.449577 1995776 cri.go:89] found id: ""
	I1216 03:44:48.449602 1995776 logs.go:282] 0 containers: []
	W1216 03:44:48.449611 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:48.449618 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:48.449678 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:48.474306 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:48.474329 1995776 cri.go:89] found id: ""
	I1216 03:44:48.474338 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:48.474396 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:48.478119 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:48.478191 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:48.505260 1995776 cri.go:89] found id: ""
	I1216 03:44:48.505285 1995776 logs.go:282] 0 containers: []
	W1216 03:44:48.505294 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:48.505301 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:48.505379 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:48.531918 1995776 cri.go:89] found id: ""
	I1216 03:44:48.531949 1995776 logs.go:282] 0 containers: []
	W1216 03:44:48.531958 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:48.531978 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:48.531990 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:48.548156 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:48.548186 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:48.627237 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:48.627259 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:48.627273 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:48.662854 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:48.662884 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:48.705876 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:48.705912 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:48.737118 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:48.737156 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:48.765508 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:48.765536 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:48.825491 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:48.825530 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:48.857756 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:48.857791 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:51.398134 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:51.408262 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:51.408340 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:51.433737 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:51.433758 1995776 cri.go:89] found id: ""
	I1216 03:44:51.433766 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:51.433823 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:51.437572 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:51.437648 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:51.462509 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:51.462533 1995776 cri.go:89] found id: ""
	I1216 03:44:51.462541 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:51.462599 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:51.466411 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:51.466487 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:51.492422 1995776 cri.go:89] found id: ""
	I1216 03:44:51.492449 1995776 logs.go:282] 0 containers: []
	W1216 03:44:51.492459 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:51.492465 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:51.492546 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:51.517547 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:51.517570 1995776 cri.go:89] found id: ""
	I1216 03:44:51.517579 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:51.517658 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:51.521326 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:51.521420 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:51.547424 1995776 cri.go:89] found id: ""
	I1216 03:44:51.547450 1995776 logs.go:282] 0 containers: []
	W1216 03:44:51.547460 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:51.547466 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:51.547545 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:51.572661 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:51.572683 1995776 cri.go:89] found id: ""
	I1216 03:44:51.572692 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:51.572748 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:51.576584 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:51.576654 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:51.602119 1995776 cri.go:89] found id: ""
	I1216 03:44:51.602147 1995776 logs.go:282] 0 containers: []
	W1216 03:44:51.602163 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:51.602169 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:51.602232 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:51.628081 1995776 cri.go:89] found id: ""
	I1216 03:44:51.628107 1995776 logs.go:282] 0 containers: []
	W1216 03:44:51.628117 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:51.628134 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:51.628146 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:51.693853 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:51.693877 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:51.693889 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:51.743763 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:51.743797 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:51.779118 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:51.779197 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:51.820485 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:51.820517 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:51.854181 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:51.854211 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:51.912182 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:51.912217 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:51.928821 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:51.928855 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:51.983182 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:51.983213 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
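The dmesg pass keeps only kernel messages at warning severity and above, which is where the likeliest silent killer of an apiserver, the kernel OOM killer, would surface. A rough hand equivalent, assuming util-linux dmesg in the node image:

    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 50
    # an OOM kill typically reads: Out of memory: Killed process <pid> (kube-apiserver)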
	I1216 03:44:54.522365 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:54.537145 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:54.537215 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:54.575496 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:54.575520 1995776 cri.go:89] found id: ""
	I1216 03:44:54.575529 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:54.575601 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:54.579983 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:54.580060 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:54.617272 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:54.617297 1995776 cri.go:89] found id: ""
	I1216 03:44:54.617306 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:54.617362 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:54.621233 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:54.621314 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:54.647326 1995776 cri.go:89] found id: ""
	I1216 03:44:54.647351 1995776 logs.go:282] 0 containers: []
	W1216 03:44:54.647361 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:54.647367 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:54.647427 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:54.676771 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:54.676792 1995776 cri.go:89] found id: ""
	I1216 03:44:54.676811 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:54.676871 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:54.680711 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:54.680782 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:54.706435 1995776 cri.go:89] found id: ""
	I1216 03:44:54.706462 1995776 logs.go:282] 0 containers: []
	W1216 03:44:54.706473 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:54.706480 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:54.706575 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:54.733350 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:54.733420 1995776 cri.go:89] found id: ""
	I1216 03:44:54.733442 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:54.733535 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:54.737742 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:54.737881 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:54.765480 1995776 cri.go:89] found id: ""
	I1216 03:44:54.765506 1995776 logs.go:282] 0 containers: []
	W1216 03:44:54.765515 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:54.765521 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:54.765578 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:54.794909 1995776 cri.go:89] found id: ""
	I1216 03:44:54.794934 1995776 logs.go:282] 0 containers: []
	W1216 03:44:54.794943 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:54.794956 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:54.794968 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:54.853333 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:54.853371 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:54.887352 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:54.887388 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:54.920760 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:54.920802 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:54.975497 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:54.975534 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:55.026914 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:55.026951 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:55.059979 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:55.060022 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:55.078448 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:55.078484 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:55.144348 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:55.144369 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:55.144381 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
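All four containers the loop does find are static pods, launched by the kubelet from on-disk manifests rather than through the API, so a restarting apiserver is also worth cross-checking against the manifests kubeadm wrote. A minimal check, assuming the standard kubeadm layout minikube uses:

    ls -l /etc/kubernetes/manifests
    # expected: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml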
	I1216 03:44:57.675777 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:44:57.686278 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:44:57.686367 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:44:57.712502 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:57.712525 1995776 cri.go:89] found id: ""
	I1216 03:44:57.712533 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:44:57.712588 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:57.716353 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:44:57.716448 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:44:57.742715 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:44:57.742740 1995776 cri.go:89] found id: ""
	I1216 03:44:57.742749 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:44:57.742803 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:57.746684 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:44:57.746759 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:44:57.771770 1995776 cri.go:89] found id: ""
	I1216 03:44:57.771797 1995776 logs.go:282] 0 containers: []
	W1216 03:44:57.771806 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:44:57.771812 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:44:57.771891 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:44:57.796915 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:57.796937 1995776 cri.go:89] found id: ""
	I1216 03:44:57.796946 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:44:57.797007 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:57.800956 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:44:57.801037 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:44:57.826111 1995776 cri.go:89] found id: ""
	I1216 03:44:57.826188 1995776 logs.go:282] 0 containers: []
	W1216 03:44:57.826211 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:44:57.826233 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:44:57.826329 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:44:57.853303 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:57.853325 1995776 cri.go:89] found id: ""
	I1216 03:44:57.853334 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:44:57.853390 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:44:57.857183 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:44:57.857261 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:44:57.885306 1995776 cri.go:89] found id: ""
	I1216 03:44:57.885331 1995776 logs.go:282] 0 containers: []
	W1216 03:44:57.885341 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:44:57.885347 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:44:57.885406 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:44:57.910654 1995776 cri.go:89] found id: ""
	I1216 03:44:57.910682 1995776 logs.go:282] 0 containers: []
	W1216 03:44:57.910692 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:44:57.910706 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:44:57.910718 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:44:57.950506 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:44:57.950590 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:44:57.981951 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:44:57.982028 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:44:58.026227 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:44:58.026297 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:44:58.087716 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:44:58.087754 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:44:58.104854 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:44:58.104885 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:44:58.170884 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:44:58.170905 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:44:58.170918 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:44:58.204417 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:44:58.204447 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:44:58.248422 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:44:58.248454 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
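The timestamps show the probe-and-gather cycle repeating about every three seconds; minikube keeps polling like this until its wait timeout expires, which is what ultimately fails the start. Rather than reading the loop inline, the same material can be captured as one snapshot from the host; PROFILE again stands in for the real profile name:

    minikube -p PROFILE logs --file=minikube-logs.txt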
	I1216 03:45:00.783458 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:45:00.798387 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:45:00.798474 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:45:00.846286 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:00.846314 1995776 cri.go:89] found id: ""
	I1216 03:45:00.846323 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:45:00.846388 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:00.851524 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:45:00.851618 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:45:00.901321 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:00.901349 1995776 cri.go:89] found id: ""
	I1216 03:45:00.901358 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:45:00.901419 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:00.907301 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:45:00.907392 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:45:00.946195 1995776 cri.go:89] found id: ""
	I1216 03:45:00.946222 1995776 logs.go:282] 0 containers: []
	W1216 03:45:00.946232 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:45:00.946239 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:45:00.946302 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:45:00.986582 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:00.986606 1995776 cri.go:89] found id: ""
	I1216 03:45:00.986615 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:45:00.986680 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:00.991791 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:45:00.991884 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:45:01.027396 1995776 cri.go:89] found id: ""
	I1216 03:45:01.027426 1995776 logs.go:282] 0 containers: []
	W1216 03:45:01.027436 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:45:01.027443 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:45:01.027506 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:45:01.071302 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:01.071330 1995776 cri.go:89] found id: ""
	I1216 03:45:01.071338 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:45:01.071404 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:01.079235 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:45:01.079317 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:45:01.129144 1995776 cri.go:89] found id: ""
	I1216 03:45:01.129174 1995776 logs.go:282] 0 containers: []
	W1216 03:45:01.129183 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:45:01.129190 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:45:01.129261 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:45:01.165521 1995776 cri.go:89] found id: ""
	I1216 03:45:01.165548 1995776 logs.go:282] 0 containers: []
	W1216 03:45:01.165558 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:45:01.165573 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:45:01.165585 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:45:01.184268 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:45:01.184314 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:45:01.346479 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:45:01.346562 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:45:01.346602 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:45:01.387176 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:45:01.387215 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:45:01.431187 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:45:01.431388 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:45:01.496103 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:45:01.496140 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:01.543359 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:45:01.543397 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:01.598484 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:45:01.598518 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:01.648516 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:45:01.648556 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:04.188392 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:45:04.206963 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:45:04.207035 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:45:04.236174 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:04.236200 1995776 cri.go:89] found id: ""
	I1216 03:45:04.236209 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:45:04.236275 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:04.240663 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:45:04.240746 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:45:04.270610 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:04.270641 1995776 cri.go:89] found id: ""
	I1216 03:45:04.270651 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:45:04.270711 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:04.274603 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:45:04.274678 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:45:04.301878 1995776 cri.go:89] found id: ""
	I1216 03:45:04.301914 1995776 logs.go:282] 0 containers: []
	W1216 03:45:04.301926 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:45:04.301935 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:45:04.302012 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:45:04.329220 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:04.329244 1995776 cri.go:89] found id: ""
	I1216 03:45:04.329252 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:45:04.329308 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:04.333151 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:45:04.333222 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:45:04.362128 1995776 cri.go:89] found id: ""
	I1216 03:45:04.362155 1995776 logs.go:282] 0 containers: []
	W1216 03:45:04.362164 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:45:04.362170 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:45:04.362229 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:45:04.388512 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:04.388535 1995776 cri.go:89] found id: ""
	I1216 03:45:04.388545 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:45:04.388601 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:04.392450 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:45:04.392530 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:45:04.419542 1995776 cri.go:89] found id: ""
	I1216 03:45:04.419571 1995776 logs.go:282] 0 containers: []
	W1216 03:45:04.419581 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:45:04.419587 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:45:04.419650 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:45:04.448552 1995776 cri.go:89] found id: ""
	I1216 03:45:04.448585 1995776 logs.go:282] 0 containers: []
	W1216 03:45:04.448595 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:45:04.448614 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:45:04.448629 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:45:04.476071 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:45:04.476099 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:45:04.536767 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:45:04.536906 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:45:04.558460 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:45:04.558540 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:04.603542 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:45:04.603620 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:04.641591 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:45:04.641680 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:04.688525 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:45:04.688599 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:45:04.723971 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:45:04.724057 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:45:04.814062 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:45:04.814080 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:45:04.814093 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:07.364055 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:45:07.375028 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:45:07.375124 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:45:07.405336 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:07.405362 1995776 cri.go:89] found id: ""
	I1216 03:45:07.405370 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:45:07.405427 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:07.409351 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:45:07.409423 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:45:07.435230 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:07.435256 1995776 cri.go:89] found id: ""
	I1216 03:45:07.435265 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:45:07.435323 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:07.439174 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:45:07.439271 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:45:07.465315 1995776 cri.go:89] found id: ""
	I1216 03:45:07.465341 1995776 logs.go:282] 0 containers: []
	W1216 03:45:07.465350 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:45:07.465356 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:45:07.465414 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:45:07.496554 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:07.496577 1995776 cri.go:89] found id: ""
	I1216 03:45:07.496586 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:45:07.496642 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:07.500517 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:45:07.500596 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:45:07.525449 1995776 cri.go:89] found id: ""
	I1216 03:45:07.525474 1995776 logs.go:282] 0 containers: []
	W1216 03:45:07.525484 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:45:07.525490 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:45:07.525553 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:45:07.550555 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:07.550587 1995776 cri.go:89] found id: ""
	I1216 03:45:07.550597 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:45:07.550659 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:07.554415 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:45:07.554488 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:45:07.579189 1995776 cri.go:89] found id: ""
	I1216 03:45:07.579211 1995776 logs.go:282] 0 containers: []
	W1216 03:45:07.579220 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:45:07.579225 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:45:07.579283 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:45:07.605423 1995776 cri.go:89] found id: ""
	I1216 03:45:07.605450 1995776 logs.go:282] 0 containers: []
	W1216 03:45:07.605459 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:45:07.605473 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:45:07.605491 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:45:07.621924 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:45:07.621954 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:45:07.696175 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:45:07.696198 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:45:07.696212 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:07.730624 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:45:07.730655 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:07.772775 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:45:07.772808 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:45:07.832974 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:45:07.833013 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:07.867784 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:45:07.867815 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:07.906224 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:45:07.906257 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:45:07.936431 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:45:07.936510 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:45:10.474085 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:45:10.484591 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:45:10.484665 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:45:10.510049 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:10.510074 1995776 cri.go:89] found id: ""
	I1216 03:45:10.510082 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:45:10.510144 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:10.513932 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:45:10.514007 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:45:10.543643 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:10.543667 1995776 cri.go:89] found id: ""
	I1216 03:45:10.543676 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:45:10.543735 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:10.547723 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:45:10.547855 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:45:10.578253 1995776 cri.go:89] found id: ""
	I1216 03:45:10.578278 1995776 logs.go:282] 0 containers: []
	W1216 03:45:10.578287 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:45:10.578293 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:45:10.578356 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:45:10.606077 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:10.606097 1995776 cri.go:89] found id: ""
	I1216 03:45:10.606106 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:45:10.606170 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:10.609942 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:45:10.610015 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:45:10.635087 1995776 cri.go:89] found id: ""
	I1216 03:45:10.635156 1995776 logs.go:282] 0 containers: []
	W1216 03:45:10.635179 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:45:10.635199 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:45:10.635287 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:45:10.665783 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:10.665858 1995776 cri.go:89] found id: ""
	I1216 03:45:10.665888 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:45:10.665975 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:10.669641 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:45:10.669754 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:45:10.693992 1995776 cri.go:89] found id: ""
	I1216 03:45:10.694068 1995776 logs.go:282] 0 containers: []
	W1216 03:45:10.694091 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:45:10.694110 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:45:10.694207 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:45:10.723150 1995776 cri.go:89] found id: ""
	I1216 03:45:10.723190 1995776 logs.go:282] 0 containers: []
	W1216 03:45:10.723200 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:45:10.723214 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:45:10.723228 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:45:10.782852 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:45:10.782889 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:45:10.799431 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:45:10.799460 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:45:10.871082 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:45:10.871103 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:45:10.871117 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:10.904964 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:45:10.904996 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:10.937048 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:45:10.937080 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:10.975565 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:45:10.975600 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:11.030585 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:45:11.030872 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:45:11.062296 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:45:11.062332 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:45:13.600309 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:45:13.611231 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:45:13.611304 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:45:13.637653 1995776 cri.go:89] found id: "4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:13.637679 1995776 cri.go:89] found id: ""
	I1216 03:45:13.637688 1995776 logs.go:282] 1 containers: [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405]
	I1216 03:45:13.637743 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:13.641416 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:45:13.641494 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:45:13.667626 1995776 cri.go:89] found id: "eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:13.667650 1995776 cri.go:89] found id: ""
	I1216 03:45:13.667659 1995776 logs.go:282] 1 containers: [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73]
	I1216 03:45:13.667736 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:13.671602 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:45:13.671679 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:45:13.697187 1995776 cri.go:89] found id: ""
	I1216 03:45:13.697214 1995776 logs.go:282] 0 containers: []
	W1216 03:45:13.697224 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:45:13.697230 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:45:13.697288 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:45:13.727147 1995776 cri.go:89] found id: "403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:13.727171 1995776 cri.go:89] found id: ""
	I1216 03:45:13.727180 1995776 logs.go:282] 1 containers: [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28]
	I1216 03:45:13.727243 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:13.731114 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:45:13.731196 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:45:13.756185 1995776 cri.go:89] found id: ""
	I1216 03:45:13.756212 1995776 logs.go:282] 0 containers: []
	W1216 03:45:13.756221 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:45:13.756227 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:45:13.756287 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:45:13.782409 1995776 cri.go:89] found id: "bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:13.782436 1995776 cri.go:89] found id: ""
	I1216 03:45:13.782444 1995776 logs.go:282] 1 containers: [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907]
	I1216 03:45:13.782511 1995776 ssh_runner.go:195] Run: which crictl
	I1216 03:45:13.786609 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:45:13.786726 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:45:13.811204 1995776 cri.go:89] found id: ""
	I1216 03:45:13.811228 1995776 logs.go:282] 0 containers: []
	W1216 03:45:13.811237 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:45:13.811243 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:45:13.811306 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:45:13.837796 1995776 cri.go:89] found id: ""
	I1216 03:45:13.837822 1995776 logs.go:282] 0 containers: []
	W1216 03:45:13.837832 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:45:13.837845 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:45:13.837856 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:45:13.907294 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:45:13.907317 1995776 logs.go:123] Gathering logs for kube-apiserver [4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405] ...
	I1216 03:45:13.907331 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405"
	I1216 03:45:13.944248 1995776 logs.go:123] Gathering logs for etcd [eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73] ...
	I1216 03:45:13.944323 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73"
	I1216 03:45:13.993082 1995776 logs.go:123] Gathering logs for kube-scheduler [403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28] ...
	I1216 03:45:13.993158 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28"
	I1216 03:45:14.029088 1995776 logs.go:123] Gathering logs for kube-controller-manager [bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907] ...
	I1216 03:45:14.029121 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907"
	I1216 03:45:14.064615 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:45:14.064690 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 03:45:14.096523 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:45:14.096571 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:45:14.113627 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:45:14.113654 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:45:14.142300 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:45:14.142329 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:45:16.704801 1995776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:45:16.715307 1995776 kubeadm.go:602] duration metric: took 4m3.412532667s to restartPrimaryControlPlane
	W1216 03:45:16.715382 1995776 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 03:45:16.715456 1995776 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:45:17.191950 1995776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:45:17.208599 1995776 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:45:17.216868 1995776 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:45:17.216933 1995776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:45:17.224951 1995776 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:45:17.224972 1995776 kubeadm.go:158] found existing configuration files:
	
	I1216 03:45:17.225026 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:45:17.232797 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:45:17.232895 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:45:17.240473 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:45:17.248021 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:45:17.248088 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:45:17.255490 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:45:17.263764 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:45:17.263855 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:45:17.271256 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:45:17.279002 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:45:17.279094 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
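(The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is re-run. The same check condensed into one loop, equivalent to the logged commands:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done

Here all four files are already absent, so every grep fails and the rm calls are no-ops.)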
	I1216 03:45:17.286666 1995776 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:45:17.338705 1995776 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:45:17.338986 1995776 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:45:17.414802 1995776 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:45:17.414879 1995776 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:45:17.414919 1995776 kubeadm.go:319] OS: Linux
	I1216 03:45:17.414970 1995776 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:45:17.415027 1995776 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:45:17.415101 1995776 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:45:17.415156 1995776 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:45:17.415211 1995776 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:45:17.415263 1995776 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:45:17.415312 1995776 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:45:17.415365 1995776 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:45:17.415415 1995776 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:45:17.482192 1995776 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:45:17.482320 1995776 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:45:17.482421 1995776 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:45:24.682121 1995776 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:45:24.685089 1995776 out.go:252]   - Generating certificates and keys ...
	I1216 03:45:24.685182 1995776 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:45:24.685252 1995776 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:45:24.685338 1995776 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:45:24.685402 1995776 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:45:24.685478 1995776 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:45:24.685536 1995776 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:45:24.685603 1995776 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:45:24.685669 1995776 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:45:24.685931 1995776 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:45:24.686196 1995776 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:45:24.686403 1995776 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:45:24.686466 1995776 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:45:24.820895 1995776 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:45:25.170804 1995776 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:45:25.297357 1995776 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:45:25.444534 1995776 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:45:25.577552 1995776 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:45:25.578243 1995776 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:45:25.580976 1995776 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:45:25.584267 1995776 out.go:252]   - Booting up control plane ...
	I1216 03:45:25.584367 1995776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:45:25.584441 1995776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:45:25.584505 1995776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:45:25.607474 1995776 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:45:25.607843 1995776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:45:25.616972 1995776 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:45:25.618086 1995776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:45:25.618135 1995776 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:45:25.804135 1995776 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:45:25.804270 1995776 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:49:25.804832 1995776 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117597s
	I1216 03:49:25.804867 1995776 kubeadm.go:319] 
	I1216 03:49:25.804922 1995776 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:49:25.804958 1995776 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:49:25.805059 1995776 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:49:25.805074 1995776 kubeadm.go:319] 
	I1216 03:49:25.805173 1995776 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:49:25.805208 1995776 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:49:25.805237 1995776 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:49:25.805241 1995776 kubeadm.go:319] 
	I1216 03:49:25.809240 1995776 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:49:25.809639 1995776 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:49:25.809740 1995776 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:49:25.809989 1995776 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 03:49:25.809995 1995776 kubeadm.go:319] 
	I1216 03:49:25.810060 1995776 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 03:49:25.810159 1995776 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117597s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
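(This is the terminal error of this init attempt: the kubelet never answered its healthz probe on 127.0.0.1:10248 within 4m0s, so kubeadm aborted at the wait-control-plane phase. kubeadm's own troubleshooting pointers apply; on the docker driver the node is a container, so they run through minikube ssh. A hedged sketch, where <profile> stands in for the profile under test, which this excerpt does not name:

    minikube ssh -p <profile> -- sudo systemctl status kubelet --no-pager
    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # cgroup hierarchy check relevant to the v1-deprecation warning above
    # ('cgroup2fs' means cgroups v2, 'tmpfs' means v1):
    minikube ssh -p <profile> -- stat -fc %T /sys/fs/cgroup
)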
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117597s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
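	
	The failure above is kubeadm's generic wait-control-plane timeout: the kubelet never answered its local health endpoint, so no static pod ever came up. The probe can be reproduced by hand inside the node (for example via `minikube ssh`) with the exact commands the error text names; the --no-pager and -n flags are added here only for non-interactive use:
	
		curl -sSL http://127.0.0.1:10248/healthz || echo 'kubelet healthz unreachable'
		systemctl status kubelet --no-pager
		journalctl -xeu kubelet -n 50 --no-pager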
	
	I1216 03:49:25.810234 1995776 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:49:26.264528 1995776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:49:26.283976 1995776 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:49:26.284046 1995776 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:49:26.295197 1995776 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:49:26.295270 1995776 kubeadm.go:158] found existing configuration files:
	
	I1216 03:49:26.295363 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:49:26.304811 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:49:26.304874 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:49:26.313880 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:49:26.323976 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:49:26.324047 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:49:26.333289 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:49:26.345096 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:49:26.345240 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:49:26.355498 1995776 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:49:26.365476 1995776 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:49:26.365550 1995776 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
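	
	The block above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and deletes the file when the endpoint is missing (here every grep exits 2 because the reset already removed the files). A minimal shell sketch of the same loop, assuming the endpoint and file names shown in the log:
	
		endpoint='https://control-plane.minikube.internal:8443'
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
		    || sudo rm -f "/etc/kubernetes/$f.conf"
		done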
	I1216 03:49:26.374736 1995776 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:49:26.426936 1995776 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:49:26.427705 1995776 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:49:26.532544 1995776 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:49:26.532612 1995776 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:49:26.532646 1995776 kubeadm.go:319] OS: Linux
	I1216 03:49:26.532688 1995776 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:49:26.532733 1995776 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:49:26.532778 1995776 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:49:26.532823 1995776 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:49:26.532869 1995776 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:49:26.532914 1995776 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:49:26.532957 1995776 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:49:26.533003 1995776 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:49:26.533047 1995776 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:49:26.624540 1995776 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:49:26.624653 1995776 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:49:26.624745 1995776 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:49:26.632369 1995776 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:49:26.635806 1995776 out.go:252]   - Generating certificates and keys ...
	I1216 03:49:26.635973 1995776 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:49:26.636072 1995776 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:49:26.636241 1995776 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:49:26.636319 1995776 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:49:26.636405 1995776 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:49:26.636880 1995776 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:49:26.637486 1995776 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:49:26.638040 1995776 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:49:26.638630 1995776 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:49:26.639268 1995776 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:49:26.639796 1995776 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:49:26.640079 1995776 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:49:27.255365 1995776 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:49:27.669178 1995776 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:49:27.899899 1995776 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:49:28.016043 1995776 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:49:28.315621 1995776 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:49:28.316681 1995776 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:49:28.319825 1995776 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:49:28.323173 1995776 out.go:252]   - Booting up control plane ...
	I1216 03:49:28.323270 1995776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:49:28.326189 1995776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:49:28.328072 1995776 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:49:28.351741 1995776 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:49:28.351870 1995776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:49:28.359388 1995776 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:49:28.359690 1995776 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:49:28.359905 1995776 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:49:28.538938 1995776 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:49:28.539115 1995776 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:53:28.538108 1995776 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001070328s
	I1216 03:53:28.538152 1995776 kubeadm.go:319] 
	I1216 03:53:28.538215 1995776 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:53:28.538256 1995776 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:53:28.538382 1995776 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:53:28.538392 1995776 kubeadm.go:319] 
	I1216 03:53:28.538501 1995776 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:53:28.538547 1995776 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:53:28.538590 1995776 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:53:28.538598 1995776 kubeadm.go:319] 
	I1216 03:53:28.543976 1995776 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:53:28.544397 1995776 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:53:28.544505 1995776 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:53:28.544768 1995776 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 03:53:28.544774 1995776 kubeadm.go:319] 
	I1216 03:53:28.544842 1995776 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
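	
	Both init attempts end with the same cgroups v1 deprecation warning, and the warning names its own knob: the kubelet configuration option 'FailCgroupV1'. A hedged sketch of setting it on a node whose kubelet config is managed directly; the lowerCamelCase YAML key is an assumption based on the usual KubeletConfiguration field naming, so verify it against the kubelet version in use. Note the warning is non-fatal in this run, so this alone would not explain the unhealthy kubelet:
	
		# append to the active kubelet config (path taken from the log above,
		# assumes no existing failCgroupV1 key), then restart the kubelet
		echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
		sudo systemctl restart kubelet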
	I1216 03:53:28.544892 1995776 kubeadm.go:403] duration metric: took 12m15.303217859s to StartCluster
	I1216 03:53:28.544935 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 03:53:28.544992 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 03:53:28.590087 1995776 cri.go:89] found id: ""
	I1216 03:53:28.590114 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.590122 1995776 logs.go:284] No container was found matching "kube-apiserver"
	I1216 03:53:28.590130 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 03:53:28.590203 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 03:53:28.635121 1995776 cri.go:89] found id: ""
	I1216 03:53:28.635149 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.635159 1995776 logs.go:284] No container was found matching "etcd"
	I1216 03:53:28.635171 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 03:53:28.635239 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 03:53:28.673278 1995776 cri.go:89] found id: ""
	I1216 03:53:28.673300 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.673308 1995776 logs.go:284] No container was found matching "coredns"
	I1216 03:53:28.673314 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 03:53:28.673370 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 03:53:28.711406 1995776 cri.go:89] found id: ""
	I1216 03:53:28.711488 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.711500 1995776 logs.go:284] No container was found matching "kube-scheduler"
	I1216 03:53:28.711507 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 03:53:28.711623 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 03:53:28.757206 1995776 cri.go:89] found id: ""
	I1216 03:53:28.757232 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.757249 1995776 logs.go:284] No container was found matching "kube-proxy"
	I1216 03:53:28.757263 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 03:53:28.757326 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 03:53:28.793316 1995776 cri.go:89] found id: ""
	I1216 03:53:28.793340 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.793355 1995776 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 03:53:28.793362 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 03:53:28.793427 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 03:53:28.837054 1995776 cri.go:89] found id: ""
	I1216 03:53:28.837079 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.837088 1995776 logs.go:284] No container was found matching "kindnet"
	I1216 03:53:28.837094 1995776 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1216 03:53:28.837173 1995776 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 03:53:28.890727 1995776 cri.go:89] found id: ""
	I1216 03:53:28.890802 1995776 logs.go:282] 0 containers: []
	W1216 03:53:28.890852 1995776 logs.go:284] No container was found matching "storage-provisioner"
	I1216 03:53:28.890876 1995776 logs.go:123] Gathering logs for container status ...
	I1216 03:53:28.890907 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 03:53:28.937679 1995776 logs.go:123] Gathering logs for kubelet ...
	I1216 03:53:28.937755 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 03:53:29.013339 1995776 logs.go:123] Gathering logs for dmesg ...
	I1216 03:53:29.013380 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 03:53:29.030919 1995776 logs.go:123] Gathering logs for describe nodes ...
	I1216 03:53:29.030999 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 03:53:29.095640 1995776 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 03:53:29.095664 1995776 logs.go:123] Gathering logs for containerd ...
	I1216 03:53:29.095676 1995776 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
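	
	After the failure the harness sweeps the node for evidence: one crictl listing per expected component, then the kubelet, dmesg, and containerd journals. An equivalent manual sweep using the same commands the log records:
	
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		         kube-controller-manager kindnet storage-provisioner; do
		  echo "== $c =="; sudo crictl ps -a --quiet --name="$c"
		done
		sudo journalctl -u kubelet -n 400 --no-pager
		sudo journalctl -u containerd -n 400 --no-pager
	
	Every crictl listing above came back empty (found id: ""), consistent with the kubelet never launching the static pods.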
	W1216 03:53:29.138705 1995776 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output logged above; duplicate elided]
	W1216 03:53:29.138771 1995776 out.go:285] * 
	W1216 03:53:29.138847 1995776 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output logged above; duplicate elided]
	W1216 03:53:29.138865 1995776 out.go:285] * 
	W1216 03:53:29.141009 1995776 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 03:53:29.146687 1995776 out.go:203] 
	W1216 03:53:29.150363 1995776 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output logged above; duplicate elided]
	
	W1216 03:53:29.150415 1995776 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 03:53:29.150437 1995776 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 03:53:29.154346 1995776 out.go:203] 

                                                
                                                
** /stderr **
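
The suggestion the log prints is concrete: pass the kubelet cgroup driver explicitly. Re-running the same start invocation with that flag would look like the following; this is the log's own proposed mitigation, not a verified fix for this run:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-271074 \
	  --memory=3072 --kubernetes-version=v1.35.0-beta.0 \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd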
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-271074 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-271074 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-271074 version --output=json: exit status 1 (123.145836ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
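
The version output above carries only clientVersion because the API server never answered; the stderr names the endpoint that refused the connection. Two quick reachability checks against that same cluster (the -k flag skips TLS verification, acceptable for a liveness probe only):

	kubectl --context kubernetes-upgrade-271074 cluster-info
	curl -k https://192.168.76.2:8443/healthz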
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-16 03:53:30.221514719 +0000 UTC m=+4987.085178806
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-271074
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-271074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c765643c4b78d2d76797e0dd04cf0dfeb5931bc0877196d890db2c4d3a522df2",
	        "Created": "2025-12-16T03:40:23.91685954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1995904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:40:55.473420393Z",
	            "FinishedAt": "2025-12-16T03:40:54.437459606Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/c765643c4b78d2d76797e0dd04cf0dfeb5931bc0877196d890db2c4d3a522df2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c765643c4b78d2d76797e0dd04cf0dfeb5931bc0877196d890db2c4d3a522df2/hostname",
	        "HostsPath": "/var/lib/docker/containers/c765643c4b78d2d76797e0dd04cf0dfeb5931bc0877196d890db2c4d3a522df2/hosts",
	        "LogPath": "/var/lib/docker/containers/c765643c4b78d2d76797e0dd04cf0dfeb5931bc0877196d890db2c4d3a522df2/c765643c4b78d2d76797e0dd04cf0dfeb5931bc0877196d890db2c4d3a522df2-json.log",
	        "Name": "/kubernetes-upgrade-271074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-271074:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-271074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c765643c4b78d2d76797e0dd04cf0dfeb5931bc0877196d890db2c4d3a522df2",
	                "LowerDir": "/var/lib/docker/overlay2/3debe03947459eb4d03fb458222103f68b10ce81d97b9da6f01a2d27b7603948-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3debe03947459eb4d03fb458222103f68b10ce81d97b9da6f01a2d27b7603948/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3debe03947459eb4d03fb458222103f68b10ce81d97b9da6f01a2d27b7603948/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3debe03947459eb4d03fb458222103f68b10ce81d97b9da6f01a2d27b7603948/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-271074",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-271074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-271074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-271074",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-271074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2fa8f46ce1d533b9a6e3ea11d5ea4c2f3bce1073fdf0299a743fc10130146467",
	            "SandboxKey": "/var/run/docker/netns/2fa8f46ce1d5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34579"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34580"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34581"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34582"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-271074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:6d:e0:aa:d1:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d833ec65ea9c38113278cc23d71bc31e4e6103cbc7585f50d178ae1b27eacb75",
	                    "EndpointID": "4184d23d699610676e1d2596eadf0ba5d504956d0639767dd238e33691b92ea1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-271074",
	                        "c765643c4b78"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
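Editor's note: the inspect dump above is the most useful connectivity reference in this post-mortem; each container port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 port. As a minimal sketch (profile name taken from the output above), a single mapping can be pulled out with a Go template instead of eyeballing the JSON:

    # Host port that Docker mapped to the node's SSH port (22/tcp)
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' kubernetes-upgrade-271074

Running "docker port kubernetes-upgrade-271074 22/tcp" reports the same mapping without a template.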
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-271074 -n kubernetes-upgrade-271074
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-271074 -n kubernetes-upgrade-271074: exit status 2 (467.684476ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-271074 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-271074 logs -n 25: (1.016898794s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                              ARGS                                                                                                               │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-167684 sudo systemctl cat docker --no-pager                                                                                                                                                                           │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo cat /etc/docker/daemon.json                                                                                                                                                                               │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo docker system info                                                                                                                                                                                        │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                       │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                       │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                  │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                            │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo cri-dockerd --version                                                                                                                                                                                     │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                       │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo systemctl cat containerd --no-pager                                                                                                                                                                       │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo cat /etc/containerd/config.toml                                                                                                                                                                           │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo containerd config dump                                                                                                                                                                                    │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo systemctl status crio --all --full --no-pager                                                                                                                                                             │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo systemctl cat crio --no-pager                                                                                                                                                                             │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                   │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ ssh     │ -p cilium-167684 sudo crio config                                                                                                                                                                                               │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │                     │
	│ delete  │ -p cilium-167684                                                                                                                                                                                                                │ cilium-167684            │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │ 16 Dec 25 03:49 UTC │
	│ start   │ -p force-systemd-env-986662 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                │ force-systemd-env-986662 │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │ 16 Dec 25 03:49 UTC │
	│ ssh     │ force-systemd-env-986662 ssh cat /etc/containerd/config.toml                                                                                                                                                                    │ force-systemd-env-986662 │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │ 16 Dec 25 03:49 UTC │
	│ delete  │ -p force-systemd-env-986662                                                                                                                                                                                                     │ force-systemd-env-986662 │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │ 16 Dec 25 03:49 UTC │
	│ start   │ -p cert-expiration-774320 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                    │ cert-expiration-774320   │ jenkins │ v1.37.0 │ 16 Dec 25 03:49 UTC │ 16 Dec 25 03:50 UTC │
	│ start   │ -p cert-expiration-774320 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                 │ cert-expiration-774320   │ jenkins │ v1.37.0 │ 16 Dec 25 03:53 UTC │ 16 Dec 25 03:53 UTC │
	│ delete  │ -p cert-expiration-774320                                                                                                                                                                                                       │ cert-expiration-774320   │ jenkins │ v1.37.0 │ 16 Dec 25 03:53 UTC │ 16 Dec 25 03:53 UTC │
	│ start   │ -p cert-options-059203 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd │ cert-options-059203      │ jenkins │ v1.37.0 │ 16 Dec 25 03:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:53:30
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:53:30.759712 2041737 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:53:30.759844 2041737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:53:30.759853 2041737 out.go:374] Setting ErrFile to fd 2...
	I1216 03:53:30.759856 2041737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:53:30.760117 2041737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:53:30.760630 2041737 out.go:368] Setting JSON to false
	I1216 03:53:30.761616 2041737 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34555,"bootTime":1765822656,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:53:30.761687 2041737 start.go:143] virtualization:  
	I1216 03:53:30.766040 2041737 out.go:179] * [cert-options-059203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:53:30.769567 2041737 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:53:30.769647 2041737 notify.go:221] Checking for updates...
	I1216 03:53:30.776162 2041737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:53:30.779511 2041737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:53:30.783535 2041737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:53:30.787463 2041737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:53:30.791400 2041737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 03:45:21 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:21.908072618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:45:21 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:21.909249017Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" with image id \"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6\", size \"15391364\" in 1.099494153s"
	Dec 16 03:45:21 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:21.909287834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" returns image reference \"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\""
	Dec 16 03:45:21 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:21.910769128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 16 03:45:23 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:23.442863657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:45:23 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:23.444703786Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.0-beta.0: active requests=0, bytes read=22430652"
	Dec 16 03:45:23 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:23.447031730Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:45:23 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:23.450658803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:45:23 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:23.451395819Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" with image id \"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\", repo tag \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a\", size \"22429671\" in 1.54058912s"
	Dec 16 03:45:23 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:23.451523356Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" returns image reference \"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\""
	Dec 16 03:45:23 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:23.452474284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 16 03:45:24 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:24.669594321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:45:24 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:24.671445559Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=20453241"
	Dec 16 03:45:24 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:24.673980702Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:45:24 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:24.678535868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:45:24 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:24.679801216Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.227214673s"
	Dec 16 03:45:24 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:45:24.679940716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.129721767Z" level=info msg="container event discarded" container=4ed4ce6f07e12ae5b6702db7945f71d078c8b86174e6287a53333a643fa94405 type=CONTAINER_DELETED_EVENT
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.144044842Z" level=info msg="container event discarded" container=b368d6bc70a62b830a133efa05c6124632b1bc07875210475a55c6aa57515b8b type=CONTAINER_DELETED_EVENT
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.155253405Z" level=info msg="container event discarded" container=403937324b17371ee052523dbf8b2d4fe800f2c61a459b08f3c1d830f3b01e28 type=CONTAINER_DELETED_EVENT
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.155301790Z" level=info msg="container event discarded" container=2e1d2611a9f2d540b245477152bdfa58a7dd8e1e6f548a2109484f69156da29c type=CONTAINER_DELETED_EVENT
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.175676249Z" level=info msg="container event discarded" container=bd602a539e03cd7ab9abf6be97b494bce88f06b992aa36caa39a905c1597b907 type=CONTAINER_DELETED_EVENT
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.175740124Z" level=info msg="container event discarded" container=93fe4551935c0cdac699a996cb58bf9ede6d628e303da3fa33eb8e99ea45588a type=CONTAINER_DELETED_EVENT
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.188996426Z" level=info msg="container event discarded" container=eb260a3238d89ff9c492b639edabbef7787583a2724ca273346eb4e8fe4d7a73 type=CONTAINER_DELETED_EVENT
	Dec 16 03:50:17 kubernetes-upgrade-271074 containerd[555]: time="2025-12-16T03:50:17.189073938Z" level=info msg="container event discarded" container=a88de8afddffe6915f1c118f9fb6edf44795fc6f1e7eb6742b58b3db60ad2843 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
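Editor's note: the refusal above just means nothing is listening on port 8443 inside the node; as the kubelet section further down shows, the kubelet never stays up long enough to launch the static-pod apiserver. A quick way to confirm from the host, assuming the node container is still running and its image ships curl:

    # Probe the apiserver health endpoint from inside the node
    out/minikube-linux-arm64 -p kubernetes-upgrade-271074 ssh -- curl -sk https://localhost:8443/healthz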
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 03:53:31 up  9:35,  0 user,  load average: 2.11, 1.76, 1.95
	Linux kubernetes-upgrade-271074 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 03:53:28 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:53:28 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 03:53:28 kubernetes-upgrade-271074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:28 kubernetes-upgrade-271074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:29 kubernetes-upgrade-271074 kubelet[14400]: E1216 03:53:28.999849   14400 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:53:29 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:53:29 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:53:29 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 03:53:29 kubernetes-upgrade-271074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:29 kubernetes-upgrade-271074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:29 kubernetes-upgrade-271074 kubelet[14419]: E1216 03:53:29.854010   14419 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:53:29 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:53:29 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:53:30 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 03:53:30 kubernetes-upgrade-271074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:30 kubernetes-upgrade-271074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:30 kubernetes-upgrade-271074 kubelet[14437]: E1216 03:53:30.620323   14437 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:53:30 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:53:30 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 03:53:31 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 16 03:53:31 kubernetes-upgrade-271074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:31 kubernetes-upgrade-271074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 03:53:31 kubernetes-upgrade-271074 kubelet[14525]: E1216 03:53:31.553258   14525 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 03:53:31 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 03:53:31 kubernetes-upgrade-271074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
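Editor's note: the kubelet section of these logs is the actual root cause of this failure. The restart counter climbs past 320 because kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host, and the node container inherits the cgroup layout of the Ubuntu 20.04 Jenkins host. A minimal host-side check (plain coreutils, nothing minikube-specific):

    # Prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/

Moving a host to cgroup v2 typically means booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line; whether that is appropriate for this CI fleet is a separate question from the test failure itself.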
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-271074 -n kubernetes-upgrade-271074
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-271074 -n kubernetes-upgrade-271074: exit status 2 (445.391948ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-271074" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-271074" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-271074
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-271074: (3.068145565s)
--- FAIL: TestKubernetesUpgrade (800.67s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (512.54s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m31.040109227s)

-- stdout --
	* [no-preload-255023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-255023" primary control-plane node in "no-preload-255023" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1216 03:54:14.422652 2047247 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:54:14.422812 2047247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:54:14.422819 2047247 out.go:374] Setting ErrFile to fd 2...
	I1216 03:54:14.422824 2047247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:54:14.423121 2047247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:54:14.423544 2047247 out.go:368] Setting JSON to false
	I1216 03:54:14.424582 2047247 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34599,"bootTime":1765822656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:54:14.424656 2047247 start.go:143] virtualization:  
	I1216 03:54:14.428465 2047247 out.go:179] * [no-preload-255023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:54:14.432367 2047247 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:54:14.432440 2047247 notify.go:221] Checking for updates...
	I1216 03:54:14.438451 2047247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:54:14.441372 2047247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:54:14.444346 2047247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:54:14.447283 2047247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:54:14.450090 2047247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:54:14.453503 2047247 config.go:182] Loaded profile config "old-k8s-version-580645": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1216 03:54:14.453603 2047247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:54:14.490004 2047247 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:54:14.490143 2047247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:54:14.606173 2047247 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:54:14.596007807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:54:14.606281 2047247 docker.go:319] overlay module found
	I1216 03:54:14.611249 2047247 out.go:179] * Using the docker driver based on user configuration
	I1216 03:54:14.614097 2047247 start.go:309] selected driver: docker
	I1216 03:54:14.614114 2047247 start.go:927] validating driver "docker" against <nil>
	I1216 03:54:14.614127 2047247 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:54:14.614808 2047247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:54:14.699885 2047247 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:54:14.690890992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:54:14.700044 2047247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:54:14.700283 2047247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:54:14.703197 2047247 out.go:179] * Using Docker driver with root privileges
	I1216 03:54:14.706211 2047247 cni.go:84] Creating CNI manager for ""
	I1216 03:54:14.706279 2047247 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 03:54:14.706295 2047247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:54:14.706412 2047247 start.go:353] cluster config:
	{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:54:14.711379 2047247 out.go:179] * Starting "no-preload-255023" primary control-plane node in "no-preload-255023" cluster
	I1216 03:54:14.714160 2047247 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 03:54:14.717173 2047247 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 03:54:14.720051 2047247 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 03:54:14.720156 2047247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 03:54:14.720186 2047247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 03:54:14.720218 2047247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json: {Name:mk2c03ae69ae0d8a0d8cf4f30c55686f45063a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:14.720496 2047247 cache.go:107] acquiring lock: {Name:mk0450325aacc7460afde2487596c0895eb14316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.720564 2047247 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 03:54:14.720579 2047247 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.247µs
	I1216 03:54:14.720592 2047247 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 03:54:14.720608 2047247 cache.go:107] acquiring lock: {Name:mk6b703a23a3ab5a8bd9af36cf3a59f27d4e1f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.720720 2047247 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:14.721102 2047247 cache.go:107] acquiring lock: {Name:mk60dd72305503c0ea2e16b1d16ccd8081a54f90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.721256 2047247 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:14.721499 2047247 cache.go:107] acquiring lock: {Name:mk6fa36dfa510ec7b8233463c2d901c70484a816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.721635 2047247 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:14.721904 2047247 cache.go:107] acquiring lock: {Name:mkc870fc6c12b387ee25e1b9ca9a320632395941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.721999 2047247 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:14.722205 2047247 cache.go:107] acquiring lock: {Name:mk91af5531a8fba3ae1331bf11e776d4365c8b42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.722287 2047247 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1216 03:54:14.722302 2047247 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 101.282µs
	I1216 03:54:14.722310 2047247 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1216 03:54:14.722328 2047247 cache.go:107] acquiring lock: {Name:mk65b0b8ff216fe2e0c76a8328b4837c4b65b152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.722386 2047247 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1216 03:54:14.722400 2047247 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 73.237µs
	I1216 03:54:14.722407 2047247 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1216 03:54:14.722439 2047247 cache.go:107] acquiring lock: {Name:mke4e5785550dce8ce0ae772cb7060b431e39dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.722509 2047247 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:14.724349 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:14.724794 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:14.725541 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:14.726393 2047247 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:14.727237 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:14.744023 2047247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 03:54:14.744051 2047247 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 03:54:14.744068 2047247 cache.go:243] Successfully downloaded all kic artifacts
	I1216 03:54:14.744102 2047247 start.go:360] acquireMachinesLock for no-preload-255023: {Name:mkc3fbe159f35ba61346866b1384afc1dc23074c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:54:14.744211 2047247 start.go:364] duration metric: took 89.532µs to acquireMachinesLock for "no-preload-255023"
	I1216 03:54:14.744246 2047247 start.go:93] Provisioning new machine with config: &{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 03:54:14.744398 2047247 start.go:125] createHost starting for "" (driver="docker")
	I1216 03:54:14.748004 2047247 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 03:54:14.748262 2047247 start.go:159] libmachine.API.Create for "no-preload-255023" (driver="docker")
	I1216 03:54:14.748298 2047247 client.go:173] LocalClient.Create starting
	I1216 03:54:14.748376 2047247 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 03:54:14.748423 2047247 main.go:143] libmachine: Decoding PEM data...
	I1216 03:54:14.748447 2047247 main.go:143] libmachine: Parsing certificate...
	I1216 03:54:14.748520 2047247 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 03:54:14.748544 2047247 main.go:143] libmachine: Decoding PEM data...
	I1216 03:54:14.748556 2047247 main.go:143] libmachine: Parsing certificate...
	I1216 03:54:14.748932 2047247 cli_runner.go:164] Run: docker network inspect no-preload-255023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 03:54:14.768372 2047247 cli_runner.go:211] docker network inspect no-preload-255023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 03:54:14.768516 2047247 network_create.go:284] running [docker network inspect no-preload-255023] to gather additional debugging logs...
	I1216 03:54:14.768542 2047247 cli_runner.go:164] Run: docker network inspect no-preload-255023
	W1216 03:54:14.793216 2047247 cli_runner.go:211] docker network inspect no-preload-255023 returned with exit code 1
	I1216 03:54:14.793247 2047247 network_create.go:287] error running [docker network inspect no-preload-255023]: docker network inspect no-preload-255023: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-255023 not found
	I1216 03:54:14.793275 2047247 network_create.go:289] output of [docker network inspect no-preload-255023]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-255023 not found
	
	** /stderr **
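Editor's note: the exit-status-1 inspects above are expected probing, not failures; minikube inspects the profile-named network first and only creates it when that inspect fails. The equivalent manual check, using the same network name as above:

    # Exit 0 when the network exists; "not found" on stderr otherwise
    docker network inspect no-preload-255023 >/dev/null 2>&1 && echo exists || echo missing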
	I1216 03:54:14.793382 2047247 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:54:14.811865 2047247 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 03:54:14.812246 2047247 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 03:54:14.812487 2047247 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 03:54:14.812791 2047247 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d3c94d2c89e9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:cb:cd:2b:97:c1} reservation:<nil>}
	I1216 03:54:14.813234 2047247 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b63020}
	I1216 03:54:14.813258 2047247 network_create.go:124] attempt to create docker network no-preload-255023 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 03:54:14.813319 2047247 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-255023 no-preload-255023
	I1216 03:54:14.882912 2047247 network_create.go:108] docker network no-preload-255023 192.168.85.0/24 created
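Editor's note: the subnet walk above skips 192.168.49.0/24, .58, .67, and .76 because other profiles hold them, then claims 192.168.85.0/24. To see which subnets are taken at any point, one option (a sketch, not minikube's own code path) is:

    # Name and subnet(s) of every Docker network on the host
    docker network ls -q | xargs docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'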
	I1216 03:54:14.882954 2047247 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-255023" container
	I1216 03:54:14.883105 2047247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 03:54:14.900235 2047247 cli_runner.go:164] Run: docker volume create no-preload-255023 --label name.minikube.sigs.k8s.io=no-preload-255023 --label created_by.minikube.sigs.k8s.io=true
	I1216 03:54:14.919671 2047247 oci.go:103] Successfully created a docker volume no-preload-255023
	I1216 03:54:14.919771 2047247 cli_runner.go:164] Run: docker run --rm --name no-preload-255023-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-255023 --entrypoint /usr/bin/test -v no-preload-255023:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 03:54:15.095652 2047247 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1216 03:54:15.100859 2047247 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1216 03:54:15.106147 2047247 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1216 03:54:15.113401 2047247 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1216 03:54:15.124118 2047247 cache.go:162] opening:  /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1216 03:54:15.565288 2047247 cache.go:157] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1216 03:54:15.565312 2047247 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 843.411157ms
	I1216 03:54:15.565323 2047247 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1216 03:54:15.714577 2047247 oci.go:107] Successfully prepared a docker volume no-preload-255023
	I1216 03:54:15.714635 2047247 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1216 03:54:15.714765 2047247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 03:54:15.714864 2047247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 03:54:15.793380 2047247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-255023 --name no-preload-255023 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-255023 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-255023 --network no-preload-255023 --ip 192.168.85.2 --volume no-preload-255023:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 03:54:16.104010 2047247 cache.go:157] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1216 03:54:16.104078 2047247 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.382583446s
	I1216 03:54:16.104105 2047247 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 03:54:16.180379 2047247 cache.go:157] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1216 03:54:16.180406 2047247 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.457965552s
	I1216 03:54:16.180418 2047247 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1216 03:54:16.226768 2047247 cache.go:157] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1216 03:54:16.226805 2047247 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.506196004s
	I1216 03:54:16.226821 2047247 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 03:54:16.240066 2047247 cache.go:157] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1216 03:54:16.240157 2047247 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.519058035s
	I1216 03:54:16.240186 2047247 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 03:54:16.240216 2047247 cache.go:87] Successfully saved all images to host disk.
	I1216 03:54:16.281381 2047247 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Running}}
	I1216 03:54:16.307432 2047247 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 03:54:16.332819 2047247 cli_runner.go:164] Run: docker exec no-preload-255023 stat /var/lib/dpkg/alternatives/iptables
	I1216 03:54:16.388722 2047247 oci.go:144] the created container "no-preload-255023" has a running status.
	I1216 03:54:16.388753 2047247 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa...
	I1216 03:54:16.680701 2047247 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 03:54:16.706373 2047247 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 03:54:16.726817 2047247 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 03:54:16.726838 2047247 kic_runner.go:114] Args: [docker exec --privileged no-preload-255023 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 03:54:16.783269 2047247 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 03:54:16.816948 2047247 machine.go:94] provisionDockerMachine start ...
	I1216 03:54:16.817043 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:16.849994 2047247 main.go:143] libmachine: Using SSH client type: native
	I1216 03:54:16.850328 2047247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34629 <nil> <nil>}
	I1216 03:54:16.850339 2047247 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:54:16.851013 2047247 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51720->127.0.0.1:34629: read: connection reset by peer
	I1216 03:54:19.986782 2047247 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 03:54:19.986807 2047247 ubuntu.go:182] provisioning hostname "no-preload-255023"
	I1216 03:54:19.986878 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:20.019946 2047247 main.go:143] libmachine: Using SSH client type: native
	I1216 03:54:20.020307 2047247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34629 <nil> <nil>}
	I1216 03:54:20.020327 2047247 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-255023 && echo "no-preload-255023" | sudo tee /etc/hostname
	I1216 03:54:20.193695 2047247 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 03:54:20.193840 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:20.215304 2047247 main.go:143] libmachine: Using SSH client type: native
	I1216 03:54:20.215720 2047247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34629 <nil> <nil>}
	I1216 03:54:20.215745 2047247 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-255023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-255023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-255023' | sudo tee -a /etc/hosts; 
				fi
			fi
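	This fragment is idempotent: it leaves /etc/hosts alone when any line already ends with the hostname, otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. On a stock image the net effect is the single line:

	    127.0.1.1 no-preload-255023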
	I1216 03:54:20.351589 2047247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:54:20.351668 2047247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 03:54:20.351709 2047247 ubuntu.go:190] setting up certificates
	I1216 03:54:20.351754 2047247 provision.go:84] configureAuth start
	I1216 03:54:20.351864 2047247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 03:54:20.369964 2047247 provision.go:143] copyHostCerts
	I1216 03:54:20.370032 2047247 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 03:54:20.370041 2047247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 03:54:20.370119 2047247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 03:54:20.370207 2047247 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 03:54:20.370212 2047247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 03:54:20.370239 2047247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 03:54:20.370296 2047247 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 03:54:20.370300 2047247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 03:54:20.370323 2047247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 03:54:20.370367 2047247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.no-preload-255023 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-255023]
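	provision.go generates that server certificate internally; a rough openssl equivalent for the same org and SAN set (file names here are placeholders, and this assumes bash for the process substitution):

	    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.no-preload-255023" \
	      -keyout server-key.pem -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-255023')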
	I1216 03:54:20.506306 2047247 provision.go:177] copyRemoteCerts
	I1216 03:54:20.506422 2047247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:54:20.506512 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:20.526746 2047247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34629 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 03:54:20.630462 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 03:54:20.653179 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 03:54:20.672733 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:54:20.690772 2047247 provision.go:87] duration metric: took 338.974542ms to configureAuth
	I1216 03:54:20.690803 2047247 ubuntu.go:206] setting minikube options for container-runtime
	I1216 03:54:20.690996 2047247 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:54:20.691009 2047247 machine.go:97] duration metric: took 3.874043208s to provisionDockerMachine
	I1216 03:54:20.691017 2047247 client.go:176] duration metric: took 5.942710356s to LocalClient.Create
	I1216 03:54:20.691032 2047247 start.go:167] duration metric: took 5.942782683s to libmachine.API.Create "no-preload-255023"
	I1216 03:54:20.691131 2047247 start.go:293] postStartSetup for "no-preload-255023" (driver="docker")
	I1216 03:54:20.691143 2047247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:54:20.691196 2047247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:54:20.691248 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:20.710338 2047247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34629 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 03:54:20.811633 2047247 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:54:20.815342 2047247 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 03:54:20.815374 2047247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 03:54:20.815386 2047247 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 03:54:20.815456 2047247 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 03:54:20.815558 2047247 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 03:54:20.815670 2047247 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:54:20.823306 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 03:54:20.844286 2047247 start.go:296] duration metric: took 153.138852ms for postStartSetup
	I1216 03:54:20.844659 2047247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 03:54:20.863504 2047247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 03:54:20.863888 2047247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:54:20.863961 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:20.885270 2047247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34629 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 03:54:20.980637 2047247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 03:54:20.986380 2047247 start.go:128] duration metric: took 6.241951665s to createHost
	I1216 03:54:20.986446 2047247 start.go:83] releasing machines lock for "no-preload-255023", held for 6.242218334s
	I1216 03:54:20.986548 2047247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 03:54:21.013516 2047247 ssh_runner.go:195] Run: cat /version.json
	I1216 03:54:21.013572 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:21.013897 2047247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:54:21.013960 2047247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 03:54:21.056940 2047247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34629 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 03:54:21.057633 2047247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34629 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 03:54:21.270478 2047247 ssh_runner.go:195] Run: systemctl --version
	I1216 03:54:21.277099 2047247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:54:21.281618 2047247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:54:21.281693 2047247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:54:21.311173 2047247 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 03:54:21.311197 2047247 start.go:496] detecting cgroup driver to use...
	I1216 03:54:21.311257 2047247 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 03:54:21.311329 2047247 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 03:54:21.329188 2047247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 03:54:21.342182 2047247 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:54:21.342243 2047247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:54:21.362140 2047247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:54:21.384917 2047247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:54:21.516788 2047247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:54:21.685748 2047247 docker.go:234] disabling docker service ...
	I1216 03:54:21.685866 2047247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:54:21.717156 2047247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:54:21.731826 2047247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:54:21.844730 2047247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:54:21.984798 2047247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:54:21.999649 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:54:22.017939 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 03:54:22.029538 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 03:54:22.040354 2047247 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 03:54:22.040479 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 03:54:22.056283 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:54:22.069129 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 03:54:22.081938 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 03:54:22.093805 2047247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:54:22.105791 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 03:54:22.121022 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 03:54:22.133711 2047247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
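	Taken together, the sed series above edits /etc/containerd/config.toml in place so that, after the daemon-reload and restart below, the CRI plugin section should look roughly like this (a sketch of the intended end state; exact nesting depends on the config the kicbase image ships):

	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10.1"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false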
	I1216 03:54:22.144171 2047247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:54:22.153969 2047247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:54:22.163036 2047247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:54:22.289045 2047247 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 03:54:22.391432 2047247 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 03:54:22.391505 2047247 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 03:54:22.397134 2047247 start.go:564] Will wait 60s for crictl version
	I1216 03:54:22.397205 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.401449 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 03:54:22.427274 2047247 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 03:54:22.427361 2047247 ssh_runner.go:195] Run: containerd --version
	I1216 03:54:22.448266 2047247 ssh_runner.go:195] Run: containerd --version
	I1216 03:54:22.474568 2047247 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 03:54:22.477720 2047247 cli_runner.go:164] Run: docker network inspect no-preload-255023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 03:54:22.493243 2047247 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 03:54:22.497557 2047247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:54:22.508389 2047247 kubeadm.go:884] updating cluster {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:54:22.508513 2047247 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 03:54:22.508566 2047247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:54:22.539629 2047247 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1216 03:54:22.539652 2047247 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 03:54:22.539688 2047247 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:54:22.539884 2047247 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:22.539966 2047247 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:22.540056 2047247 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:22.540148 2047247 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:22.540227 2047247 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1216 03:54:22.540325 2047247 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:54:22.540421 2047247 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:22.541349 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:22.541798 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:22.541982 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:22.542149 2047247 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:54:22.542461 2047247 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:22.542686 2047247 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1216 03:54:22.542840 2047247 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:54:22.542996 2047247 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:22.768547 2047247 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1216 03:54:22.768627 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:22.794740 2047247 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1216 03:54:22.794826 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1216 03:54:22.794893 2047247 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1216 03:54:22.794970 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:22.798612 2047247 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1216 03:54:22.798649 2047247 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:22.798700 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.803347 2047247 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1216 03:54:22.803418 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:22.804104 2047247 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1216 03:54:22.804165 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1216 03:54:22.836118 2047247 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1216 03:54:22.836162 2047247 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1216 03:54:22.836211 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.840422 2047247 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1216 03:54:22.840532 2047247 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:22.840626 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.840801 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:22.845276 2047247 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1216 03:54:22.845349 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:22.849783 2047247 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1216 03:54:22.849876 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:22.857640 2047247 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1216 03:54:22.857736 2047247 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:22.857829 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.871441 2047247 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1216 03:54:22.871484 2047247 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1216 03:54:22.871532 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.871611 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:54:22.911583 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:22.911820 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:22.919394 2047247 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1216 03:54:22.919486 2047247 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:22.919571 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.923497 2047247 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1216 03:54:22.923588 2047247 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:22.923678 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:22.923828 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:22.923956 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:54:22.924083 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:54:22.996907 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 03:54:22.996988 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:22.997052 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:23.012080 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:23.012161 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:23.012228 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1216 03:54:23.012289 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:54:23.174061 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:23.174144 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1216 03:54:23.174221 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:54:23.174284 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 03:54:23.233507 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 03:54:23.233626 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:23.233701 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1216 03:54:23.233806 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1216 03:54:23.233890 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1216 03:54:23.256586 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1216 03:54:23.256685 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1216 03:54:23.256768 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 03:54:23.256817 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1216 03:54:23.256838 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
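	Every cached image from here on follows the same pattern: stat the target path on the node, and copy the tarball over only when stat exits non-zero. Reduced to plain shell (paths taken from the log; this illustrates the pattern, not minikube's ssh_runner code):

	    img=/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	    src=$HOME/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	    ssh node "stat -c '%s %y' $img" >/dev/null 2>&1 || scp "$src" node:"$img"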
	I1216 03:54:23.350177 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 03:54:23.350312 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1216 03:54:23.350334 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1216 03:54:23.350415 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1216 03:54:23.350545 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1216 03:54:23.350629 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:54:23.350549 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:54:23.361868 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1216 03:54:23.361955 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1216 03:54:23.362172 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1216 03:54:23.362323 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:54:23.418217 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1216 03:54:23.418322 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1216 03:54:23.418428 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1216 03:54:23.418559 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:54:23.418657 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1216 03:54:23.418703 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1216 03:54:23.418796 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1216 03:54:23.418852 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1216 03:54:23.438068 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1216 03:54:23.438194 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1216 03:54:23.547190 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1216 03:54:23.547227 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	W1216 03:54:23.796276 2047247 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 03:54:23.796421 2047247 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1216 03:54:23.796485 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:54:23.803239 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1216 03:54:23.933765 2047247 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1216 03:54:23.933814 2047247 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:54:23.933871 2047247 ssh_runner.go:195] Run: which crictl
	I1216 03:54:24.053035 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:54:24.179224 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:54:24.179312 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 03:54:24.256595 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:54:26.054667 2047247 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.87532792s)
	I1216 03:54:26.054693 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1216 03:54:26.054711 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:54:26.054759 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 03:54:26.054817 2047247 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.798201879s)
	I1216 03:54:26.054848 2047247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:54:27.578158 2047247 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.523289258s)
	I1216 03:54:27.578200 2047247 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 03:54:27.578283 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:54:27.578352 2047247 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.523583709s)
	I1216 03:54:27.578360 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1216 03:54:27.578374 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1216 03:54:27.578398 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1216 03:54:29.027422 2047247 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.448998712s)
	I1216 03:54:29.027452 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1216 03:54:29.027479 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:54:29.027542 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1216 03:54:29.027608 2047247 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.449313552s)
	I1216 03:54:29.027640 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 03:54:29.027660 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1216 03:54:30.684021 2047247 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.656457277s)
	I1216 03:54:30.684048 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1216 03:54:30.684067 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:54:30.684124 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 03:54:31.818849 2047247 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.134704137s)
	I1216 03:54:31.818875 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1216 03:54:31.818894 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:54:31.818955 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 03:54:33.132576 2047247 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.313599856s)
	I1216 03:54:33.132605 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1216 03:54:33.132623 2047247 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:54:33.132677 2047247 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1216 03:54:33.490555 2047247 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 03:54:33.490594 2047247 cache_images.go:125] Successfully loaded all cached images
	I1216 03:54:33.490601 2047247 cache_images.go:94] duration metric: took 10.950936603s to LoadCachedImages
	I1216 03:54:33.490615 2047247 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 03:54:33.490719 2047247 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-255023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
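	Once this drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp at the end of this section), the effective unit can be checked on the node with:

	    systemctl cat kubelet      # prints kubelet.service plus the 10-kubeadm.conf override
	    sudo systemctl daemon-reload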
	I1216 03:54:33.490790 2047247 ssh_runner.go:195] Run: sudo crictl info
	I1216 03:54:33.523022 2047247 cni.go:84] Creating CNI manager for ""
	I1216 03:54:33.523073 2047247 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 03:54:33.523094 2047247 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:54:33.523117 2047247 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-255023 NodeName:no-preload-255023 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:54:33.523236 2047247 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-255023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
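	This generated manifest is what kubeadm consumes during init. Assuming minikube writes it under /var/tmp/minikube (one of the directories created at 03:54:20 above) as kubeadm.yaml, the eventual bootstrap is roughly:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml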
	I1216 03:54:33.523313 2047247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:54:33.534365 2047247 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1216 03:54:33.534446 2047247 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 03:54:33.549905 2047247 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1216 03:54:33.550017 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1216 03:54:33.550813 2047247 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1216 03:54:33.551176 2047247 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1216 03:54:33.555906 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1216 03:54:33.555943 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1216 03:54:34.428260 2047247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:54:34.446927 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1216 03:54:34.457548 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1216 03:54:34.457606 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1216 03:54:34.547010 2047247 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1216 03:54:34.578468 2047247 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1216 03:54:34.578667 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
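	The kubectl/kubelet/kubeadm downloads above are pinned to the published digests via the ?checksum=file:…sha256 query on each URL. A manual equivalent, as a sketch (assuming, as is the case for dl.k8s.io, that the .sha256 file contains only the hex digest):

	  # Sketch: fetch kubeadm for linux/arm64 and verify it against the published
	  # SHA-256, mirroring the checksum-pinned download logged above.
	  BASE=https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64
	  curl -fsSLO "$BASE/kubeadm"
	  echo "$(curl -fsSL "$BASE/kubeadm.sha256")  kubeadm" | sha256sum -c -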
	I1216 03:54:35.124328 2047247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:54:35.134200 2047247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 03:54:35.148042 2047247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 03:54:35.163648 2047247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 03:54:35.187750 2047247 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 03:54:35.192446 2047247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
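	The one-liner above is an idempotent hosts-file update: the preceding grep checks for an existing entry, and the brace group rewrites /etc/hosts with any stale control-plane line filtered out and the fresh mapping appended. The same commands unrolled, as a sketch:

	  # Sketch: the idempotent /etc/hosts rewrite from the log, step by step.
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$   # drop any stale entry
	  echo $'192.168.85.2\tcontrol-plane.minikube.internal' >> /tmp/h.$$    # append the fresh one
	  sudo cp /tmp/h.$$ /etc/hosts                                          # replace in one cp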
	I1216 03:54:35.204570 2047247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:54:35.353856 2047247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:54:35.387865 2047247 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023 for IP: 192.168.85.2
	I1216 03:54:35.387889 2047247 certs.go:195] generating shared ca certs ...
	I1216 03:54:35.387906 2047247 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:35.388144 2047247 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 03:54:35.388234 2047247 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 03:54:35.388265 2047247 certs.go:257] generating profile certs ...
	I1216 03:54:35.388345 2047247 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.key
	I1216 03:54:35.388367 2047247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt with IP's: []
	I1216 03:54:35.772433 2047247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt ...
	I1216 03:54:35.772465 2047247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: {Name:mkabb74b8ba07a762658013c5684c8f0bf866796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:35.772673 2047247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.key ...
	I1216 03:54:35.772688 2047247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.key: {Name:mkf3015ffb58adabfb8a6cb8cfb8b160af4624c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:35.772793 2047247 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5
	I1216 03:54:35.772812 2047247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt.f898ebc5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1216 03:54:36.286203 2047247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt.f898ebc5 ...
	I1216 03:54:36.286235 2047247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt.f898ebc5: {Name:mk3b253bac21ec54f3ce6d56b89ab51a0b7fd38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:36.286429 2047247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5 ...
	I1216 03:54:36.286444 2047247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5: {Name:mk24778cb4e45d59c5747c41aa2f52b34fced4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:36.286538 2047247 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt.f898ebc5 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt
	I1216 03:54:36.286618 2047247 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key
	I1216 03:54:36.286678 2047247 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key
	I1216 03:54:36.286699 2047247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt with IP's: []
	I1216 03:54:37.372771 2047247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt ...
	I1216 03:54:37.372808 2047247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt: {Name:mk515d84ce017c515eb8f9b4ea0e2494cd842f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:37.373008 2047247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key ...
	I1216 03:54:37.373025 2047247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key: {Name:mkee431c10522fa5a766188ec4328cff3e3bf011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:54:37.373226 2047247 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 03:54:37.373274 2047247 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 03:54:37.373289 2047247 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 03:54:37.373319 2047247 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 03:54:37.373348 2047247 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:54:37.373374 2047247 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 03:54:37.373456 2047247 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 03:54:37.374009 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:54:37.393305 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 03:54:37.415292 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:54:37.435960 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:54:37.453255 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 03:54:37.471214 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:54:37.491416 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:54:37.510642 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 03:54:37.531276 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:54:37.553323 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 03:54:37.573330 2047247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 03:54:37.592925 2047247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:54:37.606527 2047247 ssh_runner.go:195] Run: openssl version
	I1216 03:54:37.613026 2047247 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:54:37.621420 2047247 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:54:37.629383 2047247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:54:37.633685 2047247 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:54:37.633808 2047247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:54:37.676183 2047247 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:54:37.687353 2047247 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:54:37.698667 2047247 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 03:54:37.708058 2047247 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 03:54:37.717905 2047247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 03:54:37.723768 2047247 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 03:54:37.723858 2047247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 03:54:37.772824 2047247 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:54:37.781507 2047247 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
	I1216 03:54:37.789732 2047247 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 03:54:37.797867 2047247 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 03:54:37.806679 2047247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 03:54:37.811221 2047247 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 03:54:37.811293 2047247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 03:54:37.853056 2047247 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:54:37.861455 2047247 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
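	The ln/openssl sequence above installs each certificate under OpenSSL's hashed-directory convention: clients scan /etc/ssl/certs for a <subject-hash>.0 symlink, and `openssl x509 -hash -noout` computes that hash (b5213941, 51391683, and 3ec20f2e in this run). A generic sketch of the same pattern:

	  # Sketch: install a CA into /etc/ssl/certs via the subject-hash symlink
	  # that OpenSSL-based clients look up, as done for the three certs above.
	  PEM=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$PEM")   # -> b5213941 for this CA
	  sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"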
	I1216 03:54:37.869559 2047247 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:54:37.873897 2047247 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:54:37.873953 2047247 kubeadm.go:401] StartCluster: {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:54:37.874028 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 03:54:37.874090 2047247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:54:37.902175 2047247 cri.go:89] found id: ""
	I1216 03:54:37.902299 2047247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:54:37.910818 2047247 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:54:37.919193 2047247 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:54:37.919304 2047247 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:54:37.927871 2047247 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:54:37.927902 2047247 kubeadm.go:158] found existing configuration files:
	
	I1216 03:54:37.927999 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:54:37.936201 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:54:37.936292 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:54:37.944038 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:54:37.952575 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:54:37.952694 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:54:37.960803 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:54:37.969900 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:54:37.970009 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:54:37.978003 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:54:37.986498 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:54:37.986622 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:54:37.994727 2047247 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:54:38.048912 2047247 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:54:38.049211 2047247 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:54:38.125459 2047247 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:54:38.125571 2047247 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:54:38.125640 2047247 kubeadm.go:319] OS: Linux
	I1216 03:54:38.125714 2047247 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:54:38.125785 2047247 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:54:38.125859 2047247 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:54:38.125931 2047247 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:54:38.126007 2047247 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:54:38.126082 2047247 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:54:38.126151 2047247 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:54:38.126222 2047247 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:54:38.126290 2047247 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:54:38.195222 2047247 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:54:38.195382 2047247 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:54:38.195523 2047247 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:54:38.201212 2047247 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:54:38.209409 2047247 out.go:252]   - Generating certificates and keys ...
	I1216 03:54:38.209576 2047247 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:54:38.209674 2047247 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:54:38.536242 2047247 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:54:38.813279 2047247 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:54:38.889660 2047247 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:54:39.324716 2047247 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:54:39.394325 2047247 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:54:39.394861 2047247 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-255023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 03:54:39.705168 2047247 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:54:39.705515 2047247 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-255023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 03:54:39.857144 2047247 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:54:40.078912 2047247 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:54:40.345250 2047247 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:54:40.345773 2047247 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:54:40.705823 2047247 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:54:41.236295 2047247 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:54:41.759369 2047247 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:54:41.849064 2047247 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:54:42.133472 2047247 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:54:42.134689 2047247 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:54:42.139841 2047247 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:54:42.143918 2047247 out.go:252]   - Booting up control plane ...
	I1216 03:54:42.144147 2047247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:54:42.158878 2047247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:54:42.161420 2047247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:54:42.191842 2047247 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:54:42.191960 2047247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:54:42.208386 2047247 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:54:42.208503 2047247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:54:42.208575 2047247 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:54:42.414275 2047247 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:54:42.414417 2047247 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:58:42.417722 2047247 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000380416s
	I1216 03:58:42.417750 2047247 kubeadm.go:319] 
	I1216 03:58:42.417804 2047247 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 03:58:42.417836 2047247 kubeadm.go:319] 	- The kubelet is not running
	I1216 03:58:42.417934 2047247 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 03:58:42.417939 2047247 kubeadm.go:319] 
	I1216 03:58:42.418037 2047247 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 03:58:42.418067 2047247 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 03:58:42.418096 2047247 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 03:58:42.418100 2047247 kubeadm.go:319] 
	I1216 03:58:42.423926 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 03:58:42.424569 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 03:58:42.424687 2047247 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 03:58:42.424953 2047247 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 03:58:42.424958 2047247 kubeadm.go:319] 
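	kubeadm's wait-control-plane phase is polling the kubelet's local healthz endpoint on port 10248; the connection-refused result above means the kubelet process never came up (or exited immediately). The probe and the triage steps the error message itself suggests, as a sketch to run on the node:

	  # Sketch: reproduce kubeadm's health probe and the suggested triage.
	  curl -sSL http://127.0.0.1:10248/healthz          # a healthy kubelet answers "ok"
	  systemctl status kubelet --no-pager               # is the unit running at all?
	  journalctl -xeu kubelet --no-pager | tail -n 50   # most recent kubelet errors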
	W1216 03:58:42.425131 2047247 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-255023] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-255023] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000380416s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
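	Given the repeated cgroups-v1 warning and the 5.15 AWS kernel, one plausible culprit (not confirmed by this log alone) is kubelet v1.35 refusing to run on a cgroup v1 host. Per the warning's own instructions, the opt-out is the FailCgroupV1 kubelet configuration option; a minimal sketch, assuming /var/lib/kubelet/config.yaml is the active config (note minikube actually delivers kubelet config through kubeadm patches, so this only illustrates the knob, not the fix path used here):

	  # Sketch (assumption: cgroup v1 is the failure cause): explicitly allow
	  # cgroup v1 for kubelet >= v1.35, as the SystemVerification warning directs.
	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	  sudo systemctl restart kubelet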
	
	I1216 03:58:42.425221 2047247 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 03:58:42.425648 2047247 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 03:58:42.888365 2047247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:58:42.926208 2047247 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 03:58:42.926287 2047247 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:58:42.947379 2047247 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:58:42.947401 2047247 kubeadm.go:158] found existing configuration files:
	
	I1216 03:58:42.947474 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:58:42.960180 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:58:42.960255 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:58:42.973402 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:58:42.991629 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:58:42.991723 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:58:43.012930 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:58:43.026177 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:58:43.026245 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:58:43.036148 2047247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:58:43.050270 2047247 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:58:43.050356 2047247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:58:43.070657 2047247 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 03:58:43.117290 2047247 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 03:58:43.117635 2047247 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:58:43.206337 2047247 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 03:58:43.206415 2047247 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 03:58:43.206450 2047247 kubeadm.go:319] OS: Linux
	I1216 03:58:43.206562 2047247 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 03:58:43.206810 2047247 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 03:58:43.206905 2047247 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 03:58:43.207024 2047247 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 03:58:43.207171 2047247 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 03:58:43.207233 2047247 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 03:58:43.207277 2047247 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 03:58:43.207358 2047247 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 03:58:43.207411 2047247 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 03:58:43.306971 2047247 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:58:43.307135 2047247 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:58:43.307230 2047247 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:58:43.315478 2047247 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:58:43.319319 2047247 out.go:252]   - Generating certificates and keys ...
	I1216 03:58:43.319416 2047247 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:58:43.319485 2047247 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:58:43.319564 2047247 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 03:58:43.319634 2047247 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 03:58:43.319707 2047247 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 03:58:43.319762 2047247 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 03:58:43.319828 2047247 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 03:58:43.319891 2047247 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 03:58:43.319964 2047247 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 03:58:43.320036 2047247 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 03:58:43.320075 2047247 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 03:58:43.320131 2047247 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:58:44.024268 2047247 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:58:44.075088 2047247 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:58:44.190268 2047247 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:58:44.405296 2047247 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:58:44.575264 2047247 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:58:44.575364 2047247 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:58:44.579709 2047247 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:58:44.584834 2047247 out.go:252]   - Booting up control plane ...
	I1216 03:58:44.584943 2047247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:58:44.585023 2047247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:58:44.589789 2047247 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:58:44.617802 2047247 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:58:44.617912 2047247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:58:44.626323 2047247 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:58:44.626419 2047247 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:58:44.626466 2047247 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:58:44.808287 2047247 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:58:44.808408 2047247 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:02:44.807737 2047247 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00116118s
	I1216 04:02:44.807769 2047247 kubeadm.go:319] 
	I1216 04:02:44.807828 2047247 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:02:44.807861 2047247 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:02:44.808332 2047247 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:02:44.808350 2047247 kubeadm.go:319] 
	I1216 04:02:44.808601 2047247 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:02:44.808660 2047247 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:02:44.809013 2047247 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:02:44.809031 2047247 kubeadm.go:319] 
	I1216 04:02:44.815240 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:02:44.815746 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:02:44.815895 2047247 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:02:44.816168 2047247 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:02:44.816181 2047247 kubeadm.go:319] 
	I1216 04:02:44.816298 2047247 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:02:44.816332 2047247 kubeadm.go:403] duration metric: took 8m6.942382888s to StartCluster
	I1216 04:02:44.816370 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:02:44.816433 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:02:44.842115 2047247 cri.go:89] found id: ""
	I1216 04:02:44.842201 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.842224 2047247 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:02:44.842244 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:02:44.842323 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:02:44.871535 2047247 cri.go:89] found id: ""
	I1216 04:02:44.871561 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.871575 2047247 logs.go:284] No container was found matching "etcd"
	I1216 04:02:44.871582 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:02:44.871639 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:02:44.895425 2047247 cri.go:89] found id: ""
	I1216 04:02:44.895448 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.895456 2047247 logs.go:284] No container was found matching "coredns"
	I1216 04:02:44.895462 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:02:44.895526 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:02:44.919895 2047247 cri.go:89] found id: ""
	I1216 04:02:44.919921 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.919930 2047247 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:02:44.919937 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:02:44.920004 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:02:44.950798 2047247 cri.go:89] found id: ""
	I1216 04:02:44.950826 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.950835 2047247 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:02:44.950841 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:02:44.950901 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:02:44.984134 2047247 cri.go:89] found id: ""
	I1216 04:02:44.984161 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.984170 2047247 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:02:44.984177 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:02:44.984238 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:02:45.019779 2047247 cri.go:89] found id: ""
	I1216 04:02:45.019872 2047247 logs.go:282] 0 containers: []
	W1216 04:02:45.019899 2047247 logs.go:284] No container was found matching "kindnet"
	I1216 04:02:45.019923 2047247 logs.go:123] Gathering logs for container status ...
	I1216 04:02:45.019972 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:02:45.069300 2047247 logs.go:123] Gathering logs for kubelet ...
	I1216 04:02:45.069346 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:02:45.143191 2047247 logs.go:123] Gathering logs for dmesg ...
	I1216 04:02:45.143236 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:02:45.166359 2047247 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:02:45.166399 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:02:45.288271 2047247 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
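	The repeated connection-refused errors show that nothing is listening on the apiserver port. One way to confirm that from inside the node (a sketch reusing only the port and commands already present in this log, and assuming the node container is still up):
	
	$ out/minikube-linux-arm64 -p no-preload-255023 ssh -- sudo crictl ps -a --name=kube-apiserver
	$ out/minikube-linux-arm64 -p no-preload-255023 ssh -- curl -sk https://localhost:8443/healthz
	# both come back empty/refused here: no apiserver container was ever created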
	I1216 04:02:45.288296 2047247 logs.go:123] Gathering logs for containerd ...
	I1216 04:02:45.288311 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1216 04:02:45.336518 2047247 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
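	The two checks kubeadm suggests can be run against the node directly; a minimal sketch, assuming the profile container is still running (the healthz probe below is the same call the wait-control-plane phase reports timing out on):
	
	$ out/minikube-linux-arm64 -p no-preload-255023 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	$ out/minikube-linux-arm64 -p no-preload-255023 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50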
	W1216 04:02:45.336599 2047247 out.go:285] * 
	W1216 04:02:45.336658 2047247 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.336677 2047247 out.go:285] * 
	W1216 04:02:45.341134 2047247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:02:45.347800 2047247 out.go:203] 
	W1216 04:02:45.350811 2047247 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.351828 2047247 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:02:45.351862 2047247 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
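	A retry that folds in this suggestion would look like the sketch below; every flag except the added --extra-config is copied from the failed invocation recorded further down:
	
	$ out/minikube-linux-arm64 start -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd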
	I1216 04:02:45.356103 2047247 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-255023
helpers_test.go:244: (dbg) docker inspect no-preload-255023:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	        "Created": "2025-12-16T03:54:15.810217174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2047579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:54:15.877443945Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e-json.log",
	        "Name": "/no-preload-255023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-255023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-255023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	                "LowerDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-255023",
	                "Source": "/var/lib/docker/volumes/no-preload-255023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-255023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-255023",
	                "name.minikube.sigs.k8s.io": "no-preload-255023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "256e4f9aa86f99f79faacaa868cdf31f4b2fc13a757dc64960cd771c6c4ff8b0",
	            "SandboxKey": "/var/run/docker/netns/256e4f9aa86f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34629"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34630"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34633"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34631"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34632"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-255023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:22:4d:72:1b:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba784dbb0bf675265a222a2ccbfc260249ee6464ab188d5ef5e9194204ab459f",
	                    "EndpointID": "bb7e6178d0c584a363e69f7c998efcccf04a6debdd8cca59ecd1f85a3daebffe",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-255023",
	                        "9e19dbb9154c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
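The inspect dump above can be narrowed to the fields the post-mortem actually consults; for example, with docker's standard --format templating (field paths exactly as they appear in the JSON above):

	$ docker inspect -f '{{.State.Status}}' no-preload-255023
	$ docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-255023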
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 6 (317.922106ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:02:45.808983 2075742 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
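The warning in the status output points at a stale kubeconfig entry rather than a dead host, which matches the stderr above: the profile never made it into the kubeconfig because the start failed. The command the warning itself recommends would be the following, though with no running apiserver there is nothing to repoint at yet:

	$ out/minikube-linux-arm64 -p no-preload-255023 update-context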
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-255023 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-580645 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-092028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ stop    │ -p embed-certs-092028 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:58 UTC │
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:01:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:01:40.358627 2073073 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:01:40.358771 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.358784 2073073 out.go:374] Setting ErrFile to fd 2...
	I1216 04:01:40.358790 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.359119 2073073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:01:40.359639 2073073 out.go:368] Setting JSON to false
	I1216 04:01:40.360571 2073073 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35045,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:01:40.360643 2073073 start.go:143] virtualization:  
	I1216 04:01:40.364536 2073073 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:01:40.367700 2073073 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:01:40.367780 2073073 notify.go:221] Checking for updates...
	I1216 04:01:40.374179 2073073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:01:40.377177 2073073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:01:40.380122 2073073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:01:40.382984 2073073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:01:40.385825 2073073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:01:40.389346 2073073 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:40.389442 2073073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:01:40.423035 2073073 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:01:40.423253 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.478281 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.468443485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.478389 2073073 docker.go:319] overlay module found
	I1216 04:01:40.481589 2073073 out.go:179] * Using the docker driver based on user configuration
	I1216 04:01:40.484342 2073073 start.go:309] selected driver: docker
	I1216 04:01:40.484360 2073073 start.go:927] validating driver "docker" against <nil>
	I1216 04:01:40.484390 2073073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:01:40.485138 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.540618 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.531037075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.540793 2073073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 04:01:40.540832 2073073 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 04:01:40.541056 2073073 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:01:40.544109 2073073 out.go:179] * Using Docker driver with root privileges
	I1216 04:01:40.546924 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:40.547001 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:40.547019 2073073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:01:40.547159 2073073 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:40.552090 2073073 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:01:40.554928 2073073 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:01:40.557867 2073073 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:01:40.560695 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:40.560741 2073073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:01:40.560753 2073073 cache.go:65] Caching tarball of preloaded images
	I1216 04:01:40.560789 2073073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:01:40.560857 2073073 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:01:40.560868 2073073 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:01:40.560979 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:40.560997 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json: {Name:mkec760556e6c51ee205092e94b87aaba5f75b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:40.580559 2073073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:01:40.580583 2073073 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:01:40.580603 2073073 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:01:40.580637 2073073 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:01:40.580748 2073073 start.go:364] duration metric: took 89.631µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:01:40.580779 2073073 start.go:93] Provisioning new machine with config: &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:01:40.580854 2073073 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:01:40.584420 2073073 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 04:01:40.584656 2073073 start.go:159] libmachine.API.Create for "newest-cni-450938" (driver="docker")
	I1216 04:01:40.584695 2073073 client.go:173] LocalClient.Create starting
	I1216 04:01:40.584764 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 04:01:40.584813 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584835 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.584892 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 04:01:40.584915 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584931 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.585306 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:01:40.601358 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:01:40.601446 2073073 network_create.go:284] running [docker network inspect newest-cni-450938] to gather additional debugging logs...
	I1216 04:01:40.601465 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938
	W1216 04:01:40.616984 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 returned with exit code 1
	I1216 04:01:40.617014 2073073 network_create.go:287] error running [docker network inspect newest-cni-450938]: docker network inspect newest-cni-450938: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-450938 not found
	I1216 04:01:40.617029 2073073 network_create.go:289] output of [docker network inspect newest-cni-450938]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-450938 not found
	
	** /stderr **
	I1216 04:01:40.617127 2073073 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:40.633949 2073073 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 04:01:40.634331 2073073 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 04:01:40.634582 2073073 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 04:01:40.635035 2073073 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a049e0}
	I1216 04:01:40.635083 2073073 network_create.go:124] attempt to create docker network newest-cni-450938 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 04:01:40.635147 2073073 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-450938 newest-cni-450938
	I1216 04:01:40.694811 2073073 network_create.go:108] docker network newest-cni-450938 192.168.76.0/24 created
	I1216 04:01:40.694847 2073073 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-450938" container
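
The three "skipping subnet" lines and the final pick above reflect how a free private /24 is chosen: candidate 192.168.x.0/24 blocks are probed in order (49, 58, 67, 76, ...) and the first one not already claimed by an existing bridge is used; the node then gets the first client address after the .1 gateway, hence the static IP 192.168.76.2. Below is a minimal Go sketch of that walk. The step of 9 in the third octet is inferred from this log, and firstFreeSubnet is an illustrative name, not minikube's actual API.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate 192.168.x.0/24 blocks in the order
// the log shows and returns the first one not already taken by an
// existing bridge network.
func firstFreeSubnet(taken []string) (*net.IPNet, error) {
	used := make(map[string]bool, len(taken))
	for _, t := range taken {
		used[t] = true
	}
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if used[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free /24 left to try")
}

func main() {
	// Subnets reported as taken in the log above.
	taken := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
	subnet, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	// The gateway takes .1, so the node gets the first client IP, .2.
	fmt.Println("using free private subnet:", subnet) // 192.168.76.0/24
}
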
	I1216 04:01:40.694937 2073073 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:01:40.711581 2073073 cli_runner.go:164] Run: docker volume create newest-cni-450938 --label name.minikube.sigs.k8s.io=newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:01:40.729628 2073073 oci.go:103] Successfully created a docker volume newest-cni-450938
	I1216 04:01:40.729716 2073073 cli_runner.go:164] Run: docker run --rm --name newest-cni-450938-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --entrypoint /usr/bin/test -v newest-cni-450938:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:01:41.281285 2073073 oci.go:107] Successfully prepared a docker volume newest-cni-450938
	I1216 04:01:41.281356 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:41.281367 2073073 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:01:41.281445 2073073 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:01:45.222137 2073073 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.940644385s)
	I1216 04:01:45.222178 2073073 kic.go:203] duration metric: took 3.94080544s to extract preloaded images to volume ...
	W1216 04:01:45.222367 2073073 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:01:45.222487 2073073 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:01:45.304396 2073073 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-450938 --name newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-450938 --network newest-cni-450938 --ip 192.168.76.2 --volume newest-cni-450938:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:01:45.622538 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Running}}
	I1216 04:01:45.645583 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:45.670160 2073073 cli_runner.go:164] Run: docker exec newest-cni-450938 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:01:45.723975 2073073 oci.go:144] the created container "newest-cni-450938" has a running status.
	I1216 04:01:45.724003 2073073 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa...
	I1216 04:01:46.267889 2073073 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:01:46.287596 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.304458 2073073 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:01:46.304481 2073073 kic_runner.go:114] Args: [docker exec --privileged newest-cni-450938 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:01:46.342352 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.358427 2073073 machine.go:94] provisionDockerMachine start ...
	I1216 04:01:46.358587 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:46.375890 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:46.376242 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:46.376257 2073073 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:01:46.376910 2073073 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:01:49.515224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
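The "handshake failed: EOF" above is expected on a freshly created container: sshd inside the kic image is not yet accepting connections on the published port (127.0.0.1:34659 here), and the client simply redials until the handshake succeeds about three seconds later. A sketch of that retry pattern using golang.org/x/crypto/ssh; dialWithRetry, the attempt budget, and the one-second backoff are illustrative choices, not minikube's code.

package main

import (
	"log"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing until the SSH handshake succeeds or
// the attempt budget is spent; early attempts against a container
// that is still booting typically fail with EOF, as in the log above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		log.Printf("ssh dial %s failed: %v (attempt %d/%d)", addr, err, i+1, attempts)
		time.Sleep(time.Second)
	}
	return nil, err
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{}, // key auth omitted in this sketch
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
	}
	if _, err := dialWithRetry("127.0.0.1:34659", cfg, 10); err != nil {
		log.Fatal(err)
	}
}
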
	I1216 04:01:49.515347 2073073 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:01:49.515465 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.535848 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.536182 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.536201 2073073 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:01:49.686017 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.686121 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.708351 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.708676 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.708700 2073073 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:01:49.847224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:01:49.847257 2073073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:01:49.847275 2073073 ubuntu.go:190] setting up certificates
	I1216 04:01:49.847284 2073073 provision.go:84] configureAuth start
	I1216 04:01:49.847343 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:49.866142 2073073 provision.go:143] copyHostCerts
	I1216 04:01:49.866218 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:01:49.866228 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:01:49.866302 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:01:49.866395 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:01:49.866400 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:01:49.866426 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:01:49.866481 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:01:49.866486 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:01:49.866507 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:01:49.866552 2073073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
	I1216 04:01:50.260935 2073073 provision.go:177] copyRemoteCerts
	I1216 04:01:50.261010 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:01:50.261061 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.278254 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.374622 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:01:50.392129 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:01:50.409364 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:01:50.427428 2073073 provision.go:87] duration metric: took 580.130211ms to configureAuth
	I1216 04:01:50.427478 2073073 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:01:50.427668 2073073 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:50.427681 2073073 machine.go:97] duration metric: took 4.069230888s to provisionDockerMachine
	I1216 04:01:50.427689 2073073 client.go:176] duration metric: took 9.842984311s to LocalClient.Create
	I1216 04:01:50.427703 2073073 start.go:167] duration metric: took 9.843048588s to libmachine.API.Create "newest-cni-450938"
	I1216 04:01:50.427714 2073073 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:01:50.427724 2073073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:01:50.427814 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:01:50.427858 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.444571 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.543256 2073073 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:01:50.546463 2073073 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:01:50.546490 2073073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:01:50.546502 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:01:50.546555 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:01:50.546641 2073073 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:01:50.546744 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:01:50.554130 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:50.571399 2073073 start.go:296] duration metric: took 143.669232ms for postStartSetup
	I1216 04:01:50.571809 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.589075 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:50.589367 2073073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:01:50.589424 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.606538 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.701772 2073073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:01:50.711675 2073073 start.go:128] duration metric: took 10.130806483s to createHost
	I1216 04:01:50.711705 2073073 start.go:83] releasing machines lock for "newest-cni-450938", held for 10.130943333s
	I1216 04:01:50.711776 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.732853 2073073 ssh_runner.go:195] Run: cat /version.json
	I1216 04:01:50.732921 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.733181 2073073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:01:50.733238 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.768130 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.773572 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.871263 2073073 ssh_runner.go:195] Run: systemctl --version
	I1216 04:01:50.965373 2073073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:01:50.969981 2073073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:01:50.970086 2073073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:01:51.000188 2073073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:01:51.000222 2073073 start.go:496] detecting cgroup driver to use...
	I1216 04:01:51.000256 2073073 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:01:51.000314 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:01:51.019286 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:01:51.033299 2073073 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:01:51.033403 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:01:51.051418 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:01:51.070273 2073073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:01:51.194121 2073073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:01:51.311613 2073073 docker.go:234] disabling docker service ...
	I1216 04:01:51.311729 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:01:51.333815 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:01:51.346480 2073073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:01:51.470333 2073073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:01:51.603299 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:01:51.616625 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:01:51.630599 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:01:51.640005 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:01:51.649178 2073073 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:01:51.649257 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:01:51.658373 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.667673 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:01:51.676660 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.685480 2073073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:01:51.694285 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:01:51.703488 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:01:51.712372 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:01:51.721367 2073073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:01:51.729097 2073073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:01:51.736893 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:51.844375 2073073 ssh_runner.go:195] Run: sudo systemctl restart containerd
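
The sed pipeline above edits /etc/containerd/config.toml in place: it pins the sandbox image, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, migrates legacy runtime names to io.containerd.runc.v2, and re-enables unprivileged ports before the daemon-reload and restart. The substitutions are anchored and indentation-preserving; a Go sketch of the same idea (illustrative only, showing just the SystemdCgroup edit) looks like this:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A two-line excerpt of the containerd config, as the sed target.
	config := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true"
	// Same anchored, indentation-preserving substitution as
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
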
	I1216 04:01:51.993981 2073073 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:01:51.994107 2073073 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:01:51.998351 2073073 start.go:564] Will wait 60s for crictl version
	I1216 04:01:51.998465 2073073 ssh_runner.go:195] Run: which crictl
	I1216 04:01:52.005463 2073073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:01:52.032896 2073073 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:01:52.032981 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.059717 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.085644 2073073 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:01:52.088617 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:52.106258 2073073 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:01:52.110161 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.122893 2073073 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:01:52.125844 2073073 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:01:52.126001 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:52.126091 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.152470 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.152498 2073073 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:01:52.152563 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.176896 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.176919 2073073 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:01:52.176928 2073073 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:01:52.177016 2073073 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:01:52.177086 2073073 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:01:52.218042 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:52.218071 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:52.218119 2073073 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:01:52.218150 2073073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:01:52.218321 2073073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:01:52.218398 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:01:52.230127 2073073 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:01:52.230208 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:01:52.239812 2073073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:01:52.255679 2073073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:01:52.270419 2073073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1216 04:01:52.284034 2073073 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:01:52.287803 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.297256 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:52.412361 2073073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:01:52.428888 2073073 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:01:52.428959 2073073 certs.go:195] generating shared ca certs ...
	I1216 04:01:52.428991 2073073 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.429192 2073073 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:01:52.429285 2073073 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:01:52.429319 2073073 certs.go:257] generating profile certs ...
	I1216 04:01:52.429409 2073073 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:01:52.429451 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt with IP's: []
	I1216 04:01:52.591834 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt ...
	I1216 04:01:52.591928 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt: {Name:mk7778fd64a4e46926332e38f467016f166dd4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592375 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key ...
	I1216 04:01:52.592423 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key: {Name:mk64ab6c72a270d4e474bc857c4508cc11c704c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592850 2073073 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:01:52.592903 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 04:01:52.672242 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c ...
	I1216 04:01:52.672287 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c: {Name:mk3c094233344d156b233623b9dbfae4496ab12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672537 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c ...
	I1216 04:01:52.672554 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c: {Name:mke958b63de0c9e687b9653a66eec1e3497a17af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672658 2073073 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt
	I1216 04:01:52.672758 2073073 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key
	I1216 04:01:52.672837 2073073 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:01:52.672864 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt with IP's: []
	I1216 04:01:53.025120 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt ...
	I1216 04:01:53.025154 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt: {Name:mkca565dc28355ccf88123a839d9cc0986e3f757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025346 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key ...
	I1216 04:01:53.025361 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key: {Name:mkb5acdd577d99db642b84842da90293bb2494a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
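
The profile certificates above are minted locally with the shared minikubeCA as issuer; the apiserver cert carries exactly the IP SANs logged (10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2). A condensed crypto/x509 sketch of that flow follows, with a throwaway in-memory CA standing in for .minikube/ca.crt and ca.key; names, key sizes, and lifetimes here are illustrative, not minikube's actual values.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway in-memory CA standing in for the shared profile CA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate with the IP SANs logged above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
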
	I1216 04:01:53.025563 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:01:53.025610 2073073 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:01:53.025625 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:01:53.025652 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:01:53.025681 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:01:53.025711 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:01:53.025764 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:53.026345 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:01:53.047604 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:01:53.067106 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:01:53.086400 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:01:53.106958 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:01:53.125852 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:01:53.144046 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:01:53.162443 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:01:53.180617 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:01:53.202914 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:01:53.228308 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:01:53.254030 2073073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:01:53.268427 2073073 ssh_runner.go:195] Run: openssl version
	I1216 04:01:53.275148 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.283060 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:01:53.291347 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295430 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295543 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.338110 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:01:53.345692 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
	I1216 04:01:53.354101 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.361981 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:01:53.369807 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.373913 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.374034 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.415192 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.422756 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.430342 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.438151 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:01:53.446120 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450114 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450180 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.491333 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:01:53.498914 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
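
	For reference, the <hash>.0 names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links: `openssl x509 -hash` prints the hash under which OpenSSL looks a CA up in /etc/ssl/certs. A condensed sketch of the linking step minikube performs above, assuming any of the PEM files already present on the node:

		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")   # subject hash, e.g. b5213941
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves CAs via <hash>.0 links
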
	I1216 04:01:53.506771 2073073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:01:53.510538 2073073 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:01:53.510593 2073073 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:53.510681 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:01:53.510746 2073073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:01:53.537047 2073073 cri.go:89] found id: ""
	I1216 04:01:53.537176 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:01:53.545264 2073073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:01:53.553401 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:01:53.553502 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:01:53.561504 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:01:53.561527 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:01:53.561581 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:01:53.569732 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:01:53.569844 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:01:53.577622 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:01:53.585671 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:01:53.585743 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:01:53.593272 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.601710 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:01:53.601791 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.609698 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:01:53.617871 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:01:53.617953 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
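
	The four grep/rm pairs above are minikube's stale-config cleanup: each /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted before kubeadm runs. Here every grep exits with status 2 because the files do not exist yet, so the rm calls are no-ops. An equivalent sketch of that loop:

		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
		    || sudo rm -f /etc/kubernetes/$f.conf   # absent or pointing elsewhere: drop it
		done
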
	I1216 04:01:53.625500 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:01:53.665591 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:01:53.665653 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:01:53.769108 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:01:53.769186 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:01:53.769232 2073073 kubeadm.go:319] OS: Linux
	I1216 04:01:53.769281 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:01:53.769333 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:01:53.769384 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:01:53.769436 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:01:53.769489 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:01:53.769544 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:01:53.769592 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:01:53.769644 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:01:53.769694 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:01:53.843812 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:01:53.843931 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:01:53.844032 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:01:53.849932 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:01:53.856727 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:01:53.856901 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:01:53.857012 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:01:54.280084 2073073 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:01:54.512481 2073073 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:01:55.160883 2073073 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:01:55.382188 2073073 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:01:55.675582 2073073 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:01:55.675752 2073073 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:55.934138 2073073 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:01:55.934424 2073073 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:56.047522 2073073 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:01:56.247778 2073073 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:01:56.462583 2073073 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:01:56.462916 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:01:56.695545 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:01:56.807074 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:01:56.888027 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:01:57.401338 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:01:57.476073 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:01:57.476371 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:01:57.480701 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:01:57.484671 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:01:57.484788 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:01:57.484862 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:01:57.485264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:01:57.510728 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:01:57.510840 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:01:57.520316 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:01:57.521983 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:01:57.522283 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:01:57.655090 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:01:57.655217 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:02:44.807737 2047247 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00116118s
	I1216 04:02:44.807769 2047247 kubeadm.go:319] 
	I1216 04:02:44.807828 2047247 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:02:44.807861 2047247 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:02:44.808332 2047247 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:02:44.808350 2047247 kubeadm.go:319] 
	I1216 04:02:44.808601 2047247 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:02:44.808660 2047247 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:02:44.809013 2047247 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:02:44.809031 2047247 kubeadm.go:319] 
	I1216 04:02:44.815240 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:02:44.815746 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:02:44.815895 2047247 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:02:44.816168 2047247 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:02:44.816181 2047247 kubeadm.go:319] 
	I1216 04:02:44.816298 2047247 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
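
	The failed kubelet-check is the literal probe named in the error above: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s. A sketch of reproducing the probe and the two follow-up checks kubeadm suggests, run inside the node (`<profile>` stands for the affected profile name):

		minikube ssh -p <profile>
		curl -sSL http://127.0.0.1:10248/healthz; echo   # kubeadm's exact health probe
		systemctl status kubelet
		journalctl -xeu kubelet
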
	I1216 04:02:44.816332 2047247 kubeadm.go:403] duration metric: took 8m6.942382888s to StartCluster
	I1216 04:02:44.816370 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:02:44.816433 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:02:44.842115 2047247 cri.go:89] found id: ""
	I1216 04:02:44.842201 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.842224 2047247 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:02:44.842244 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:02:44.842323 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:02:44.871535 2047247 cri.go:89] found id: ""
	I1216 04:02:44.871561 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.871575 2047247 logs.go:284] No container was found matching "etcd"
	I1216 04:02:44.871582 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:02:44.871639 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:02:44.895425 2047247 cri.go:89] found id: ""
	I1216 04:02:44.895448 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.895456 2047247 logs.go:284] No container was found matching "coredns"
	I1216 04:02:44.895462 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:02:44.895526 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:02:44.919895 2047247 cri.go:89] found id: ""
	I1216 04:02:44.919921 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.919930 2047247 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:02:44.919937 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:02:44.920004 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:02:44.950798 2047247 cri.go:89] found id: ""
	I1216 04:02:44.950826 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.950835 2047247 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:02:44.950841 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:02:44.950901 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:02:44.984134 2047247 cri.go:89] found id: ""
	I1216 04:02:44.984161 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.984170 2047247 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:02:44.984177 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:02:44.984238 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:02:45.019779 2047247 cri.go:89] found id: ""
	I1216 04:02:45.019872 2047247 logs.go:282] 0 containers: []
	W1216 04:02:45.019899 2047247 logs.go:284] No container was found matching "kindnet"
	I1216 04:02:45.019923 2047247 logs.go:123] Gathering logs for container status ...
	I1216 04:02:45.019972 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:02:45.069300 2047247 logs.go:123] Gathering logs for kubelet ...
	I1216 04:02:45.069346 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:02:45.143191 2047247 logs.go:123] Gathering logs for dmesg ...
	I1216 04:02:45.143236 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:02:45.166359 2047247 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:02:45.166399 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:02:45.288271 2047247 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:02:45.288296 2047247 logs.go:123] Gathering logs for containerd ...
	I1216 04:02:45.288311 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1216 04:02:45.336518 2047247 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:02:45.336599 2047247 out.go:285] * 
	W1216 04:02:45.336658 2047247 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.336677 2047247 out.go:285] * 
	W1216 04:02:45.341134 2047247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:02:45.347800 2047247 out.go:203] 
	W1216 04:02:45.350811 2047247 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.351828 2047247 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:02:45.351862 2047247 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:02:45.356103 2047247 out.go:203] 
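
	The kubelet journal captured below shows the actual fatal error: on this cgroup v1 host, kubelet v1.35.0-beta.0 fails its own configuration validation and systemd restart-loops it. Per the [WARNING SystemVerification] text above, cgroup v1 can be explicitly re-enabled through the kubelet configuration option FailCgroupV1. A sketch of that override (untested here; the v1beta1 field spelling failCgroupV1 is assumed, the file name is illustrative, and the SystemVerification preflight check must still be skipped as the warning requires):

		cat > failcgroupv1-patch.yaml <<'EOF'   # e.g. for use with kubeadm's --patches directory
		apiVersion: kubelet.config.k8s.io/v1beta1
		kind: KubeletConfiguration
		failCgroupV1: false   # opt back in to cgroup v1, as the warning describes
		EOF
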
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 03:54:26 no-preload-255023 containerd[758]: time="2025-12-16T03:54:26.067986383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.567349921Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.569948194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.588851575Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.596155920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.015464986Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.018332545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.026775300Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.027787428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.675775645Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.676978497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686002604Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686924289Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.808092869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.810368902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.821471844Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.822069729Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.124780620Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.126168452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.131232825Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.133433035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.474548837Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.476933667Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486012944Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486755540Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:46.441851    5567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:46.442699    5567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:46.444294    5567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:46.444597    5567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:46.446059    5567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:02:46 up  9:45,  0 user,  load average: 1.13, 1.85, 2.04
	Linux no-preload-255023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:02:43 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:43 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 16 04:02:43 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:43 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:43 no-preload-255023 kubelet[5372]: E1216 04:02:43.982965    5372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:43 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:43 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:44 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 16 04:02:44 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:44 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:44 no-preload-255023 kubelet[5378]: E1216 04:02:44.733580    5378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:44 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:44 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:45 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 04:02:45 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:45 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:45 no-preload-255023 kubelet[5462]: E1216 04:02:45.498243    5462 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:45 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:45 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:46 no-preload-255023 kubelet[5510]: E1216 04:02:46.259576    5510 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 6 (301.549134ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:02:46.859025 2075967 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (512.54s)

TestStartStop/group/newest-cni/serial/FirstStart (502.47s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1216 04:01:54.221652 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:02:28.303222 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.709041375s)

-- stdout --
	* [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1216 04:01:40.358627 2073073 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:01:40.358771 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.358784 2073073 out.go:374] Setting ErrFile to fd 2...
	I1216 04:01:40.358790 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.359119 2073073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:01:40.359639 2073073 out.go:368] Setting JSON to false
	I1216 04:01:40.360571 2073073 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35045,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:01:40.360643 2073073 start.go:143] virtualization:  
	I1216 04:01:40.364536 2073073 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:01:40.367700 2073073 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:01:40.367780 2073073 notify.go:221] Checking for updates...
	I1216 04:01:40.374179 2073073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:01:40.377177 2073073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:01:40.380122 2073073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:01:40.382984 2073073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:01:40.385825 2073073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:01:40.389346 2073073 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:40.389442 2073073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:01:40.423035 2073073 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:01:40.423253 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.478281 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.468443485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.478389 2073073 docker.go:319] overlay module found
	I1216 04:01:40.481589 2073073 out.go:179] * Using the docker driver based on user configuration
	I1216 04:01:40.484342 2073073 start.go:309] selected driver: docker
	I1216 04:01:40.484360 2073073 start.go:927] validating driver "docker" against <nil>
	I1216 04:01:40.484390 2073073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:01:40.485138 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.540618 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.531037075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.540793 2073073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 04:01:40.540832 2073073 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
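As the warning says, `--network-plugin=cni` only declares that some CNI will be used; it installs nothing by itself, and minikube falls back to recommending kindnet for the docker driver + containerd pairing (see the cni.go lines below). A hedged alternative invocation would select the plugin explicitly through the friendlier flag, e.g.:

	out/minikube-linux-arm64 start -p newest-cni-450938 --cni=kindnet --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=containerd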
	I1216 04:01:40.541056 2073073 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:01:40.544109 2073073 out.go:179] * Using Docker driver with root privileges
	I1216 04:01:40.546924 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:40.547001 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:40.547019 2073073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:01:40.547159 2073073 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:40.552090 2073073 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:01:40.554928 2073073 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:01:40.557867 2073073 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:01:40.560695 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:40.560741 2073073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:01:40.560753 2073073 cache.go:65] Caching tarball of preloaded images
	I1216 04:01:40.560789 2073073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:01:40.560857 2073073 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:01:40.560868 2073073 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:01:40.560979 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:40.560997 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json: {Name:mkec760556e6c51ee205092e94b87aaba5f75b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:40.580559 2073073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:01:40.580583 2073073 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:01:40.580603 2073073 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:01:40.580637 2073073 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:01:40.580748 2073073 start.go:364] duration metric: took 89.631µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:01:40.580779 2073073 start.go:93] Provisioning new machine with config: &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:01:40.580854 2073073 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:01:40.584420 2073073 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 04:01:40.584656 2073073 start.go:159] libmachine.API.Create for "newest-cni-450938" (driver="docker")
	I1216 04:01:40.584695 2073073 client.go:173] LocalClient.Create starting
	I1216 04:01:40.584764 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 04:01:40.584813 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584835 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.584892 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 04:01:40.584915 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584931 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.585306 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:01:40.601358 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:01:40.601446 2073073 network_create.go:284] running [docker network inspect newest-cni-450938] to gather additional debugging logs...
	I1216 04:01:40.601465 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938
	W1216 04:01:40.616984 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 returned with exit code 1
	I1216 04:01:40.617014 2073073 network_create.go:287] error running [docker network inspect newest-cni-450938]: docker network inspect newest-cni-450938: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-450938 not found
	I1216 04:01:40.617029 2073073 network_create.go:289] output of [docker network inspect newest-cni-450938]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-450938 not found
	
	** /stderr **
	I1216 04:01:40.617127 2073073 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:40.633949 2073073 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 04:01:40.634331 2073073 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 04:01:40.634582 2073073 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 04:01:40.635035 2073073 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a049e0}
	I1216 04:01:40.635083 2073073 network_create.go:124] attempt to create docker network newest-cni-450938 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 04:01:40.635147 2073073 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-450938 newest-cni-450938
	I1216 04:01:40.694811 2073073 network_create.go:108] docker network newest-cni-450938 192.168.76.0/24 created
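minikube probes the private 192.168.x.0/24 ranges in order (49, 58, 67, ...) and takes the first subnet without an existing bridge, here 192.168.76.0/24 with gateway 192.168.76.1. A hedged one-liner with the standard docker CLI, not part of the test run, to confirm what was created:

	docker network inspect newest-cni-450938 -f '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected here: 192.168.76.0/24 via 192.168.76.1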
	I1216 04:01:40.694847 2073073 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-450938" container
	I1216 04:01:40.694937 2073073 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:01:40.711581 2073073 cli_runner.go:164] Run: docker volume create newest-cni-450938 --label name.minikube.sigs.k8s.io=newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:01:40.729628 2073073 oci.go:103] Successfully created a docker volume newest-cni-450938
	I1216 04:01:40.729716 2073073 cli_runner.go:164] Run: docker run --rm --name newest-cni-450938-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --entrypoint /usr/bin/test -v newest-cni-450938:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:01:41.281285 2073073 oci.go:107] Successfully prepared a docker volume newest-cni-450938
	I1216 04:01:41.281356 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:41.281367 2073073 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:01:41.281445 2073073 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:01:45.222137 2073073 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.940644385s)
	I1216 04:01:45.222178 2073073 kic.go:203] duration metric: took 3.94080544s to extract preloaded images to volume ...
	W1216 04:01:45.222367 2073073 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:01:45.222487 2073073 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:01:45.304396 2073073 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-450938 --name newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-450938 --network newest-cni-450938 --ip 192.168.76.2 --volume newest-cni-450938:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:01:45.622538 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Running}}
	I1216 04:01:45.645583 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:45.670160 2073073 cli_runner.go:164] Run: docker exec newest-cni-450938 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:01:45.723975 2073073 oci.go:144] the created container "newest-cni-450938" has a running status.
	I1216 04:01:45.724003 2073073 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa...
	I1216 04:01:46.267889 2073073 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:01:46.287596 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.304458 2073073 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:01:46.304481 2073073 kic_runner.go:114] Args: [docker exec --privileged newest-cni-450938 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:01:46.342352 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.358427 2073073 machine.go:94] provisionDockerMachine start ...
	I1216 04:01:46.358587 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:46.375890 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:46.376242 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:46.376257 2073073 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:01:46.376910 2073073 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:01:49.515224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.515347 2073073 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:01:49.515465 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.535848 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.536182 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.536201 2073073 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:01:49.686017 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.686121 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.708351 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.708676 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.708700 2073073 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:01:49.847224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:01:49.847257 2073073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:01:49.847275 2073073 ubuntu.go:190] setting up certificates
	I1216 04:01:49.847284 2073073 provision.go:84] configureAuth start
	I1216 04:01:49.847343 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:49.866142 2073073 provision.go:143] copyHostCerts
	I1216 04:01:49.866218 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:01:49.866228 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:01:49.866302 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:01:49.866395 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:01:49.866400 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:01:49.866426 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:01:49.866481 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:01:49.866486 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:01:49.866507 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:01:49.866552 2073073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
	I1216 04:01:50.260935 2073073 provision.go:177] copyRemoteCerts
	I1216 04:01:50.261010 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:01:50.261061 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.278254 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.374622 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:01:50.392129 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:01:50.409364 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:01:50.427428 2073073 provision.go:87] duration metric: took 580.130211ms to configureAuth
	I1216 04:01:50.427478 2073073 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:01:50.427668 2073073 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:50.427681 2073073 machine.go:97] duration metric: took 4.069230888s to provisionDockerMachine
	I1216 04:01:50.427689 2073073 client.go:176] duration metric: took 9.842984311s to LocalClient.Create
	I1216 04:01:50.427703 2073073 start.go:167] duration metric: took 9.843048588s to libmachine.API.Create "newest-cni-450938"
	I1216 04:01:50.427714 2073073 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:01:50.427724 2073073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:01:50.427814 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:01:50.427858 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.444571 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.543256 2073073 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:01:50.546463 2073073 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:01:50.546490 2073073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:01:50.546502 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:01:50.546555 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:01:50.546641 2073073 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:01:50.546744 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:01:50.554130 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:50.571399 2073073 start.go:296] duration metric: took 143.669232ms for postStartSetup
	I1216 04:01:50.571809 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.589075 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:50.589367 2073073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:01:50.589424 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.606538 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.701772 2073073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:01:50.711675 2073073 start.go:128] duration metric: took 10.130806483s to createHost
	I1216 04:01:50.711705 2073073 start.go:83] releasing machines lock for "newest-cni-450938", held for 10.130943333s
	I1216 04:01:50.711776 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.732853 2073073 ssh_runner.go:195] Run: cat /version.json
	I1216 04:01:50.732921 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.733181 2073073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:01:50.733238 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.768130 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.773572 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.871263 2073073 ssh_runner.go:195] Run: systemctl --version
	I1216 04:01:50.965373 2073073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:01:50.969981 2073073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:01:50.970086 2073073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:01:51.000188 2073073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:01:51.000222 2073073 start.go:496] detecting cgroup driver to use...
	I1216 04:01:51.000256 2073073 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:01:51.000314 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:01:51.019286 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:01:51.033299 2073073 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:01:51.033403 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:01:51.051418 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:01:51.070273 2073073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:01:51.194121 2073073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:01:51.311613 2073073 docker.go:234] disabling docker service ...
	I1216 04:01:51.311729 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:01:51.333815 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:01:51.346480 2073073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:01:51.470333 2073073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:01:51.603299 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:01:51.616625 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:01:51.630599 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:01:51.640005 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:01:51.649178 2073073 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:01:51.649257 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:01:51.658373 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.667673 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:01:51.676660 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.685480 2073073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:01:51.694285 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:01:51.703488 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:01:51.712372 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:01:51.721367 2073073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:01:51.729097 2073073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:01:51.736893 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:51.844375 2073073 ssh_runner.go:195] Run: sudo systemctl restart containerd
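Because the host cgroup driver was detected as "cgroupfs" (start.go:496 above), these sed edits force `SystemdCgroup = false` in /etc/containerd/config.toml so that containerd's runc shim and the kubelet (configured with `cgroupDriver: cgroupfs` in the KubeletConfiguration further below) agree on one driver; a mismatch here is a classic cause of pods stuck in crash loops. A hedged way to verify the rewrite on the node:

	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- sudo grep -n SystemdCgroup /etc/containerd/config.toml
	# expected: SystemdCgroup = false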
	I1216 04:01:51.993981 2073073 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:01:51.994107 2073073 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:01:51.998351 2073073 start.go:564] Will wait 60s for crictl version
	I1216 04:01:51.998465 2073073 ssh_runner.go:195] Run: which crictl
	I1216 04:01:52.005463 2073073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:01:52.032896 2073073 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:01:52.032981 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.059717 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.085644 2073073 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:01:52.088617 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:52.106258 2073073 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:01:52.110161 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
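The grep/rewrite pair above pins `host.minikube.internal` to the network gateway (192.168.76.1) in the node's /etc/hosts, giving workloads a stable name for reaching the host machine. A hedged spot check from outside the harness:

	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- grep host.minikube.internal /etc/hosts
	# expected: 192.168.76.1	host.minikube.internal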
	I1216 04:01:52.122893 2073073 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:01:52.125844 2073073 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:01:52.126001 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:52.126091 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.152470 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.152498 2073073 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:01:52.152563 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.176896 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.176919 2073073 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:01:52.176928 2073073 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:01:52.177016 2073073 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
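The fragment above is the systemd drop-in minikube generates for the kubelet: the empty `ExecStart=` line clears any ExecStart inherited from the packaged unit before the full command line is substituted, and the file is shipped to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp below). A hedged way to view the merged unit on the node:

	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- systemctl cat kubelet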
	I1216 04:01:52.177086 2073073 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:01:52.218042 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:52.218071 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:52.218119 2073073 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:01:52.218150 2073073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:01:52.218321 2073073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
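The generated file stacks four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is shipped to /var/tmp/minikube/kubeadm.yaml.new (the scp below). As a hedged sanity check, assuming this kubeadm build still carries the `config validate` subcommand from recent releases, the file can be validated in place without applying it:

	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new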
	I1216 04:01:52.218398 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:01:52.230127 2073073 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:01:52.230208 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:01:52.239812 2073073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:01:52.255679 2073073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:01:52.270419 2073073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1216 04:01:52.284034 2073073 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:01:52.287803 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.297256 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:52.412361 2073073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:01:52.428888 2073073 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:01:52.428959 2073073 certs.go:195] generating shared ca certs ...
	I1216 04:01:52.428991 2073073 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.429192 2073073 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:01:52.429285 2073073 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:01:52.429319 2073073 certs.go:257] generating profile certs ...
	I1216 04:01:52.429409 2073073 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:01:52.429451 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt with IP's: []
	I1216 04:01:52.591834 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt ...
	I1216 04:01:52.591928 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt: {Name:mk7778fd64a4e46926332e38f467016f166dd4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592375 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key ...
	I1216 04:01:52.592423 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key: {Name:mk64ab6c72a270d4e474bc857c4508cc11c704c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592850 2073073 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:01:52.592903 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 04:01:52.672242 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c ...
	I1216 04:01:52.672287 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c: {Name:mk3c094233344d156b233623b9dbfae4496ab12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672537 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c ...
	I1216 04:01:52.672554 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c: {Name:mke958b63de0c9e687b9653a66eec1e3497a17af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672658 2073073 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt
	I1216 04:01:52.672758 2073073 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key
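	crypto.go signs the apiserver serving cert with the shared minikubeCA, embedding the four IP SANs listed above (service IP, loopback, and node IP among them). Not minikube's actual Go path, but a rough openssl equivalent; ca.crt/ca.key stand in for the CA pair under ~/.minikube and the file names are illustrative:

		openssl req -newkey rsa:2048 -nodes \
		  -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
		openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
		  -CAcreateserial -out apiserver.crt -days 365 \
		  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')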
	I1216 04:01:52.672837 2073073 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:01:52.672864 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt with IP's: []
	I1216 04:01:53.025120 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt ...
	I1216 04:01:53.025154 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt: {Name:mkca565dc28355ccf88123a839d9cc0986e3f757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025346 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key ...
	I1216 04:01:53.025361 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key: {Name:mkb5acdd577d99db642b84842da90293bb2494a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025563 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:01:53.025610 2073073 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:01:53.025625 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:01:53.025652 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:01:53.025681 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:01:53.025711 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:01:53.025764 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:53.026345 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:01:53.047604 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:01:53.067106 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:01:53.086400 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:01:53.106958 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:01:53.125852 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:01:53.144046 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:01:53.162443 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:01:53.180617 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:01:53.202914 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:01:53.228308 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:01:53.254030 2073073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:01:53.268427 2073073 ssh_runner.go:195] Run: openssl version
	I1216 04:01:53.275148 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.283060 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:01:53.291347 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295430 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295543 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.338110 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:01:53.345692 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
	I1216 04:01:53.354101 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.361981 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:01:53.369807 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.373913 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.374034 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.415192 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.422756 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.430342 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.438151 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:01:53.446120 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450114 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450180 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.491333 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:01:53.498914 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
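	The *.0 symlinks above follow OpenSSL's hashed-directory lookup convention: each trusted cert in /etc/ssl/certs must be reachable under <subject-hash>.0, where the hash comes from `openssl x509 -hash`. Generalized for any PEM (the b5213941 value matches the log):

		pem=/usr/share/ca-certificates/minikubeCA.pem
		hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA
		sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"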
	I1216 04:01:53.506771 2073073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:01:53.510538 2073073 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:01:53.510593 2073073 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:53.510681 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:01:53.510746 2073073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:01:53.537047 2073073 cri.go:89] found id: ""
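	The empty `found id: ""` result is expected here: the query filters CRI containers by the kube-system pod-namespace label, and nothing has been scheduled before kubeadm init runs. The same probe can be repeated by hand on the node:

		# lists container IDs of kube-system pods in any state; empty on a fresh node
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system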
	I1216 04:01:53.537176 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:01:53.545264 2073073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:01:53.553401 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:01:53.553502 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:01:53.561504 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:01:53.561527 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:01:53.561581 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:01:53.569732 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:01:53.569844 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:01:53.577622 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:01:53.585671 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:01:53.585743 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:01:53.593272 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.601710 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:01:53.601791 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.609698 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:01:53.617871 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:01:53.617953 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:01:53.625500 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:01:53.665591 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:01:53.665653 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:01:53.769108 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:01:53.769186 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:01:53.769232 2073073 kubeadm.go:319] OS: Linux
	I1216 04:01:53.769281 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:01:53.769333 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:01:53.769384 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:01:53.769436 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:01:53.769489 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:01:53.769544 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:01:53.769592 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:01:53.769644 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:01:53.769694 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:01:53.843812 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:01:53.843931 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:01:53.844032 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:01:53.849932 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:01:53.856727 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:01:53.856901 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:01:53.857012 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:01:54.280084 2073073 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:01:54.512481 2073073 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:01:55.160883 2073073 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:01:55.382188 2073073 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:01:55.675582 2073073 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:01:55.675752 2073073 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:55.934138 2073073 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:01:55.934424 2073073 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:56.047522 2073073 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:01:56.247778 2073073 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:01:56.462583 2073073 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:01:56.462916 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:01:56.695545 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:01:56.807074 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:01:56.888027 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:01:57.401338 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:01:57.476073 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:01:57.476371 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:01:57.480701 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:01:57.484671 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:01:57.484788 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:01:57.484862 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:01:57.485264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:01:57.510728 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:01:57.510840 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:01:57.520316 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:01:57.521983 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:01:57.522283 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:01:57.655090 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:01:57.655217 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:05:57.654978 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000290251s
	I1216 04:05:57.655319 2073073 kubeadm.go:319] 
	I1216 04:05:57.655442 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:05:57.655501 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:05:57.655753 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:05:57.655760 2073073 kubeadm.go:319] 
	I1216 04:05:57.656095 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:05:57.656161 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:05:57.656330 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:05:57.656339 2073073 kubeadm.go:319] 
	I1216 04:05:57.661429 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:05:57.661908 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:05:57.662048 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:05:57.662311 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:05:57.662326 2073073 kubeadm.go:319] 
	I1216 04:05:57.662412 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 04:05:57.662579 2073073 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000290251s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
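	The wait-control-plane phase gates entirely on the kubelet's local health endpoint, so the failure can be triaged on the node with the probes kubeadm itself names above:

		curl -sS http://127.0.0.1:10248/healthz; echo    # kubeadm's probe; a healthy kubelet prints "ok"
		sudo systemctl status kubelet --no-pager
		sudo journalctl -xeu kubelet -n 100 --no-pager   # usually shows why the kubelet exited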
	
	I1216 04:05:57.662661 2073073 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 04:05:58.084120 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:05:58.098877 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:05:58.098960 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:05:58.107810 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:05:58.107839 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:05:58.107907 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:05:58.116252 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:05:58.116319 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:05:58.123966 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:05:58.131928 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:05:58.131999 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:05:58.139938 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.148354 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:05:58.148421 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.155951 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:05:58.163949 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:05:58.164019 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:05:58.172134 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:05:58.209714 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:05:58.209936 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:05:58.280761 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:05:58.280869 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:05:58.280943 2073073 kubeadm.go:319] OS: Linux
	I1216 04:05:58.281014 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:05:58.281081 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:05:58.281135 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:05:58.281192 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:05:58.281251 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:05:58.281316 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:05:58.281370 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:05:58.281425 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:05:58.281480 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:05:58.347935 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:05:58.348070 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:05:58.348235 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:05:58.355578 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:05:58.361023 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:05:58.361193 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:05:58.361322 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:05:58.361438 2073073 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:05:58.361549 2073073 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:05:58.361663 2073073 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:05:58.367384 2073073 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:05:58.367458 2073073 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:05:58.367521 2073073 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:05:58.367595 2073073 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:05:58.367668 2073073 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:05:58.367706 2073073 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:05:58.367762 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:05:58.550047 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:05:59.040542 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:05:59.832816 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:06:00.196554 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:06:00.344590 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:06:00.344735 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:06:00.344804 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:06:00.348130 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:06:00.348264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:06:00.348345 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:06:00.348416 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:06:00.386806 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:06:00.386918 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:06:00.399084 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:06:00.400082 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:06:00.400137 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:06:00.555518 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:06:00.555632 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:10:00.553098 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000446161s
	I1216 04:10:00.553139 2073073 kubeadm.go:319] 
	I1216 04:10:00.553240 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:10:00.553447 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:10:00.553632 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:10:00.553642 2073073 kubeadm.go:319] 
	I1216 04:10:00.554240 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:10:00.554310 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:10:00.554364 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:10:00.554369 2073073 kubeadm.go:319] 
	I1216 04:10:00.559672 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:10:00.560440 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:10:00.560638 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:10:00.560897 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:10:00.560908 2073073 kubeadm.go:319] 
	I1216 04:10:00.561045 2073073 kubeadm.go:403] duration metric: took 8m7.05045578s to StartCluster
	I1216 04:10:00.561088 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:10:00.561095 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:10:00.561160 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:10:00.591750 2073073 cri.go:89] found id: ""
	I1216 04:10:00.591842 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.591857 2073073 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:10:00.591866 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:10:00.591936 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:10:00.621403 2073073 cri.go:89] found id: ""
	I1216 04:10:00.621441 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.621454 2073073 logs.go:284] No container was found matching "etcd"
	I1216 04:10:00.621463 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:10:00.621538 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:10:00.650404 2073073 cri.go:89] found id: ""
	I1216 04:10:00.650434 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.650444 2073073 logs.go:284] No container was found matching "coredns"
	I1216 04:10:00.650451 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:10:00.650524 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:10:00.680445 2073073 cri.go:89] found id: ""
	I1216 04:10:00.680521 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.680536 2073073 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:10:00.680543 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:10:00.680611 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:10:00.710361 2073073 cri.go:89] found id: ""
	I1216 04:10:00.710396 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.710406 2073073 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:10:00.710412 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:10:00.710473 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:10:00.748242 2073073 cri.go:89] found id: ""
	I1216 04:10:00.748318 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.748352 2073073 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:10:00.748389 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:10:00.748488 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:10:00.780257 2073073 cri.go:89] found id: ""
	I1216 04:10:00.780341 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.780365 2073073 logs.go:284] No container was found matching "kindnet"
	I1216 04:10:00.780402 2073073 logs.go:123] Gathering logs for kubelet ...
	I1216 04:10:00.780432 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:10:00.837018 2073073 logs.go:123] Gathering logs for dmesg ...
	I1216 04:10:00.837057 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:10:00.854084 2073073 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:10:00.854114 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:10:00.921357 2073073 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:10:00.911353    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.912410    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.913336    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915304    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915802    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:10:00.911353    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.912410    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.913336    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915304    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915802    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:10:00.921436 2073073 logs.go:123] Gathering logs for containerd ...
	I1216 04:10:00.921463 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:10:00.962148 2073073 logs.go:123] Gathering logs for container status ...
	I1216 04:10:00.962187 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
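	The container-status probe is a two-level shell fallback: resolve crictl from PATH (letting the bare name fail if absent) and drop back to docker ps when the CRI query errors out. Spelled out:

		# prefer crictl; fall back to docker if crictl is absent or fails
		sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a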
	W1216 04:10:00.992078 2073073 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000446161s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:10:00.992139 2073073 out.go:285] * 
	W1216 04:10:00.992191 2073073 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000446161s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:10:00.992219 2073073 out.go:285] * 
	W1216 04:10:00.994876 2073073 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:10:01.000867 2073073 out.go:203] 
	W1216 04:10:01.005125 2073073 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000446161s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:10:01.005428 2073073 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:10:01.005513 2073073 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:10:01.011350 2073073 out.go:203] 

                                                
                                                
** /stderr **
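The kubeadm failure above reduces to the kubelet never answering its health endpoint at http://127.0.0.1:10248/healthz within the 4m0s wait. The commands below re-run the same probes by hand inside the node container; this is a sketch, assuming the profile name newest-cni-450938 from this run, the docker driver, and the repo-local minikube binary, and the individual checks are the ones quoted in the kubeadm output itself.

	# Run the same probes kubeadm performed, inside the node container
	# (profile name taken from this run; docker driver assumed).
	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- curl -sSL http://127.0.0.1:10248/healthz

	# The SystemVerification warnings concern cgroups; this tells v1 from v2.
	out/minikube-linux-arm64 ssh -p newest-cni-450938 -- stat -fc %T /sys/fs/cgroup   # cgroup2fs = v2, tmpfs = v1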
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
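The suggestion minikube prints at the end of the log is to pass the kubelet cgroup driver explicitly. A hedged retry of the same failed start with that flag added (the extra flag is taken verbatim from the log's own suggestion; whether it clears this particular failure is not verified here):

	# Same arguments as the failed start, plus the suggested cgroup-driver
	# override; -v=5 raises log verbosity for the retry.
	out/minikube-linux-arm64 start -p newest-cni-450938 \
	  --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd -v=5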
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-450938
helpers_test.go:244: (dbg) docker inspect newest-cni-450938:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	        "Created": "2025-12-16T04:01:45.321904496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2073503,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:01:45.386518816Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hosts",
	        "LogPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65-json.log",
	        "Name": "/newest-cni-450938",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450938:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-450938",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	                "LowerDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450938",
	                "Source": "/var/lib/docker/volumes/newest-cni-450938/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450938",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450938",
	                "name.minikube.sigs.k8s.io": "newest-cni-450938",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53b7af9d36189bc075fe07c3b0e2530c19a08a8195afda92335ea20af6a0ae37",
	            "SandboxKey": "/var/run/docker/netns/53b7af9d3618",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34659"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34660"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34663"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34661"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34662"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450938": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:67:74:12:6c:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "961937bd6f37532287f488797e74382e326ca0852d2ef3f8a1d23a546f1f7d1a",
	                    "EndpointID": "959007b5102d8f520c150f1b38dcce2db8d49e04ba955be8676da8afebfb51e3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450938",
	                        "e2dde4cac2e0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
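The inspect dump above is long; the handful of fields the post-mortem actually leans on can be pulled directly with docker inspect's Go templates. A sketch against the container name from this run (the template paths mirror the JSON shown above):

	# Container liveness and exit state.
	docker inspect -f '{{.State.Status}} (exit {{.State.ExitCode}})' newest-cni-450938

	# Host port that the apiserver port 8443/tcp is published on.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-450938

	# Resource limits handed to the node container.
	docker inspect -f '{{.HostConfig.Memory}} bytes RAM, {{.HostConfig.NanoCpus}} nano-CPUs' newest-cni-450938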
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938: exit status 6 (393.570656ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:10:01.479325 2085309 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
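The status probe fails only on the kubeconfig lookup: the cluster container is Running, but the profile has no entry in the kubeconfig, so kubectl still points at a stale context. The fix is the one the output itself names; a sketch assuming the same profile and the repo-local minikube binary:

	# Rewrite the kubeconfig entry for this profile, then confirm the context.
	out/minikube-linux-arm64 update-context -p newest-cni-450938
	kubectl config current-context
	kubectl get nodes --context newest-cni-450938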
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-092028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ stop    │ -p embed-certs-092028 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:58 UTC │
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	│ stop    │ -p no-preload-255023 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ addons  │ enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ start   │ -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:04:36
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:04:36.142328 2078887 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:04:36.142562 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142588 2078887 out.go:374] Setting ErrFile to fd 2...
	I1216 04:04:36.142607 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142894 2078887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:04:36.143393 2078887 out.go:368] Setting JSON to false
	I1216 04:04:36.144368 2078887 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35221,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:04:36.144465 2078887 start.go:143] virtualization:  
	I1216 04:04:36.150070 2078887 out.go:179] * [no-preload-255023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:04:36.153020 2078887 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:04:36.153105 2078887 notify.go:221] Checking for updates...
	I1216 04:04:36.158759 2078887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:04:36.161685 2078887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:36.164397 2078887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:04:36.167148 2078887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:04:36.169926 2078887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:04:36.173114 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:36.173672 2078887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:04:36.208296 2078887 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:04:36.208429 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.272451 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.263127415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.272558 2078887 docker.go:319] overlay module found
	I1216 04:04:36.275603 2078887 out.go:179] * Using the docker driver based on existing profile
	I1216 04:04:36.278393 2078887 start.go:309] selected driver: docker
	I1216 04:04:36.278413 2078887 start.go:927] validating driver "docker" against &{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:36.278512 2078887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:04:36.279246 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.337226 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.327670673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.337567 2078887 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:04:36.337598 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:36.337648 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:36.337694 2078887 start.go:353] cluster config:
	{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
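Annotation: the cluster config above is the whole profile in one struct, and a few lines below it is saved to the profile's config.json. A minimal sketch of round-tripping a small subset of that config to JSON, assuming plain encoding/json; the field subset is taken from the log, everything else is omitted, and the struct names here are illustrative rather than minikube's actual types.

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    // KubernetesConfig and ClusterConfig mirror a few fields from the dump above.
    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cc := ClusterConfig{
            Name: "no-preload-255023", Driver: "docker", Memory: 3072, CPUs: 2,
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.35.0-beta.0",
                ClusterName:       "no-preload-255023",
                ContainerRuntime:  "containerd",
            },
        }
        // Write to a scratch path rather than the real profile directory.
        data, _ := json.MarshalIndent(cc, "", "  ")
        if err := os.WriteFile("/tmp/config.json", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }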
	I1216 04:04:36.342771 2078887 out.go:179] * Starting "no-preload-255023" primary control-plane node in "no-preload-255023" cluster
	I1216 04:04:36.345786 2078887 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:04:36.348879 2078887 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:04:36.351831 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:36.352008 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.352389 2078887 cache.go:107] acquiring lock: {Name:mk0450325aacc7460afde2487596c0895eb14316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352472 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 04:04:36.352485 2078887 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.485µs
	I1216 04:04:36.352508 2078887 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 04:04:36.352528 2078887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:04:36.352738 2078887 cache.go:107] acquiring lock: {Name:mkc870fc6c12b387ee25e1b9ca9a320632395941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352823 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1216 04:04:36.352838 2078887 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 104.014µs
	I1216 04:04:36.352845 2078887 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352864 2078887 cache.go:107] acquiring lock: {Name:mk6b703a23a3ab5a8bd9af36cf3a59f27d4e1f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352901 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1216 04:04:36.352910 2078887 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 48.311µs
	I1216 04:04:36.352917 2078887 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352934 2078887 cache.go:107] acquiring lock: {Name:mk60dd72305503c0ea2e16b1d16ccd8081a54f90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352967 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1216 04:04:36.352983 2078887 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 49.427µs
	I1216 04:04:36.352990 2078887 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353002 2078887 cache.go:107] acquiring lock: {Name:mk6fa36dfa510ec7b8233463c2d901c70484a816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353044 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1216 04:04:36.353053 2078887 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 53.111µs
	I1216 04:04:36.353060 2078887 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353096 2078887 cache.go:107] acquiring lock: {Name:mk65b0b8ff216fe2e0c76a8328b4837c4b65b152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353150 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1216 04:04:36.353161 2078887 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 80.704µs
	I1216 04:04:36.353167 2078887 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1216 04:04:36.353185 2078887 cache.go:107] acquiring lock: {Name:mk91af5531a8fba3ae1331bf11e776d4365c8b42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353224 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1216 04:04:36.353234 2078887 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 51.182µs
	I1216 04:04:36.353241 2078887 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1216 04:04:36.353254 2078887 cache.go:107] acquiring lock: {Name:mke4e5785550dce8ce0ae772cb7060b431e39dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353286 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1216 04:04:36.353295 2078887 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.321µs
	I1216 04:04:36.353301 2078887 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1216 04:04:36.353308 2078887 cache.go:87] Successfully saved all images to host disk.
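Annotation: each required image above is guarded by its own lock, checked against the on-disk tar cache, and skipped when already present; the per-image timings are in microseconds precisely because every check was a cache hit. A minimal sketch of that check-before-download pattern, with a hypothetical ensureCached helper and a stub fetch standing in for a registry pull:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // ensureCached skips fetch when the cached tar already exists on disk,
    // mirroring the "exists ... took ... succeeded" triplets in the log.
    // fetch is only invoked on a cache miss.
    func ensureCached(tarPath string, fetch func(dst string) error) error {
        start := time.Now()
        if _, err := os.Stat(tarPath); err == nil {
            fmt.Printf("cache image %q took %s (hit)\n", tarPath, time.Since(start))
            return nil
        }
        if err := fetch(tarPath); err != nil {
            return fmt.Errorf("caching %s: %w", tarPath, err)
        }
        fmt.Printf("cache image %q took %s (miss)\n", tarPath, time.Since(start))
        return nil
    }

    func main() {
        _ = ensureCached("/tmp/pause_3.10.1", func(dst string) error {
            return os.WriteFile(dst, []byte("stub image tar"), 0o644) // stand-in for a pull
        })
    }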
	I1216 04:04:36.371527 2078887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:04:36.371552 2078887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:04:36.371574 2078887 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:04:36.371606 2078887 start.go:360] acquireMachinesLock for no-preload-255023: {Name:mkc3fbe159f35ba61346866b1384afc1dc23074c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.371679 2078887 start.go:364] duration metric: took 52.75µs to acquireMachinesLock for "no-preload-255023"
	I1216 04:04:36.371703 2078887 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:04:36.371713 2078887 fix.go:54] fixHost starting: 
	I1216 04:04:36.371983 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.389190 2078887 fix.go:112] recreateIfNeeded on no-preload-255023: state=Stopped err=<nil>
	W1216 04:04:36.389224 2078887 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:04:36.392475 2078887 out.go:252] * Restarting existing docker container for "no-preload-255023" ...
	I1216 04:04:36.392575 2078887 cli_runner.go:164] Run: docker start no-preload-255023
	I1216 04:04:36.686743 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.709621 2078887 kic.go:430] container "no-preload-255023" state is running.
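Annotation: the restart path is plain docker CLI, exactly as logged: "docker start" on the stopped container, then "docker container inspect --format={{.State.Status}}" to confirm it is running. A small Go sketch of shelling out to those two commands and polling for the running state; the timeout and helper name are assumptions, not minikube's values:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // containerStatus runs the same inspect command the log shows.
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        name := "no-preload-255023"
        if err := exec.Command("docker", "start", name).Run(); err != nil {
            log.Fatalf("docker start: %v", err)
        }
        for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); time.Sleep(time.Second) {
            if st, err := containerStatus(name); err == nil && st == "running" {
                fmt.Println("container is running")
                return
            }
        }
        log.Fatal("container never reached running state")
    }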
	I1216 04:04:36.710033 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:36.742909 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.743215 2078887 machine.go:94] provisionDockerMachine start ...
	I1216 04:04:36.743307 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:36.771624 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:36.772082 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:36.772113 2078887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:04:36.772685 2078887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51402->127.0.0.1:34664: read: connection reset by peer
	I1216 04:04:39.911257 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 04:04:39.911284 2078887 ubuntu.go:182] provisioning hostname "no-preload-255023"
	I1216 04:04:39.911351 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:39.929644 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:39.929951 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:39.929968 2078887 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-255023 && echo "no-preload-255023" | sudo tee /etc/hostname
	I1216 04:04:40.091006 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 04:04:40.091150 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.111323 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:40.111660 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:40.111686 2078887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-255023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-255023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-255023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:04:40.255648 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
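Annotation: the /etc/hosts script above is deliberately idempotent: it leaves the file alone when an entry for the hostname already exists, rewrites an existing 127.0.1.1 line if there is one, and only appends otherwise. The same decision logic rendered in Go over the file contents (pure string manipulation; writing the result back over SSH is elided):

    package main

    import (
        "fmt"
        "regexp"
    )

    // ensureHostsEntry reproduces the shell logic from the log: no-op if the
    // hostname is already mapped, rewrite an existing 127.0.1.1 line, else append.
    func ensureHostsEntry(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // already mapped
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "no-preload-255023"))
    }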
	I1216 04:04:40.255679 2078887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:04:40.255708 2078887 ubuntu.go:190] setting up certificates
	I1216 04:04:40.255719 2078887 provision.go:84] configureAuth start
	I1216 04:04:40.255800 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.274459 2078887 provision.go:143] copyHostCerts
	I1216 04:04:40.274544 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:04:40.274559 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:04:40.274643 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:04:40.274749 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:04:40.274761 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:04:40.274788 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:04:40.274850 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:04:40.274858 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:04:40.274882 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:04:40.274932 2078887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.no-preload-255023 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-255023]
	I1216 04:04:40.540362 2078887 provision.go:177] copyRemoteCerts
	I1216 04:04:40.540434 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:04:40.540481 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.560258 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.658891 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:04:40.677291 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:04:40.696276 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 04:04:40.714152 2078887 provision.go:87] duration metric: took 458.418313ms to configureAuth
	I1216 04:04:40.714179 2078887 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:04:40.714393 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:40.714406 2078887 machine.go:97] duration metric: took 3.971173434s to provisionDockerMachine
	I1216 04:04:40.714414 2078887 start.go:293] postStartSetup for "no-preload-255023" (driver="docker")
	I1216 04:04:40.714431 2078887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:04:40.714490 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:04:40.714532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.731640 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.827149 2078887 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:04:40.830526 2078887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:04:40.830554 2078887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:04:40.830567 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:04:40.830622 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:04:40.830706 2078887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:04:40.830809 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:04:40.838400 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:40.856071 2078887 start.go:296] duration metric: took 141.636209ms for postStartSetup
	I1216 04:04:40.856173 2078887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:04:40.856212 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.873995 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.968232 2078887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:04:40.973380 2078887 fix.go:56] duration metric: took 4.601659976s for fixHost
	I1216 04:04:40.973407 2078887 start.go:83] releasing machines lock for "no-preload-255023", held for 4.601715131s
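Annotation: the recurring "duration metric: took ..." lines (configureAuth, provisionDockerMachine, fixHost, the machines lock above) follow the usual start/defer timing idiom. A minimal sketch of that pattern; the timed wrapper is a hypothetical name, not a minikube function:

    package main

    import (
        "log"
        "time"
    )

    // timed logs how long a step took, in the spirit of the
    // "duration metric: took ... to <step>" lines above.
    func timed(step string, fn func() error) error {
        start := time.Now()
        defer func() { log.Printf("duration metric: took %s to %s", time.Since(start), step) }()
        return fn()
    }

    func main() {
        _ = timed("fixHost", func() error { time.Sleep(50 * time.Millisecond); return nil })
    }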
	I1216 04:04:40.973483 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.991467 2078887 ssh_runner.go:195] Run: cat /version.json
	I1216 04:04:40.991532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.991607 2078887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:04:40.991672 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:41.016410 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.023238 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.219027 2078887 ssh_runner.go:195] Run: systemctl --version
	I1216 04:04:41.225735 2078887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:04:41.231530 2078887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:04:41.231614 2078887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:04:41.245369 2078887 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:04:41.245409 2078887 start.go:496] detecting cgroup driver to use...
	I1216 04:04:41.245441 2078887 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:04:41.245491 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:04:41.264763 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:04:41.278940 2078887 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:04:41.279078 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:04:41.295177 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:04:41.308854 2078887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:04:41.425808 2078887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:04:41.539126 2078887 docker.go:234] disabling docker service ...
	I1216 04:04:41.539232 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:04:41.555103 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:04:41.569579 2078887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:04:41.697114 2078887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:04:41.825875 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:04:41.840190 2078887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:04:41.856382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:04:41.866837 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:04:41.876037 2078887 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:04:41.876170 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:04:41.885348 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.894763 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:04:41.904382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.913120 2078887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:04:41.922033 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:04:41.931520 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:04:41.940760 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:04:41.953916 2078887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:04:41.967109 2078887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:04:41.975264 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.127685 2078887 ssh_runner.go:195] Run: sudo systemctl restart containerd
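Annotation: the run of sed commands above rewrites /etc/containerd/config.toml in place (sandbox_image, SystemdCgroup = false to match the host's cgroupfs driver, runc v2 runtime, conf_dir, unprivileged ports) before daemon-reload and a containerd restart. One of those rewrites done with Go's regexp instead of sed, purely as an illustration; the TOML snippet is a made-up fragment:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        // Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }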
	I1216 04:04:42.257333 2078887 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:04:42.257501 2078887 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:04:42.262740 2078887 start.go:564] Will wait 60s for crictl version
	I1216 04:04:42.262889 2078887 ssh_runner.go:195] Run: which crictl
	I1216 04:04:42.267776 2078887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:04:42.299498 2078887 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:04:42.299668 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.325553 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.351925 2078887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:04:42.355177 2078887 cli_runner.go:164] Run: docker network inspect no-preload-255023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:04:42.376901 2078887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 04:04:42.381129 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:04:42.391782 2078887 kubeadm.go:884] updating cluster {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:04:42.391898 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:42.391946 2078887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:04:42.421381 2078887 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:04:42.421429 2078887 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:04:42.421437 2078887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:04:42.421531 2078887 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-255023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
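Annotation: the kubelet systemd drop-in above is assembled from the node's settings (the kubelet binary path pinned to the Kubernetes version, hostname-override, node-ip) and is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A sketch of rendering such a unit with text/template; the template text is abbreviated from the log and is not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, struct{ Version, Node, IP string }{
            "v1.35.0-beta.0", "no-preload-255023", "192.168.85.2",
        })
    }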
	I1216 04:04:42.421601 2078887 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:04:42.451031 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:42.451088 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:42.451111 2078887 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:04:42.451134 2078887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-255023 NodeName:no-preload-255023 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:04:42.451548 2078887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-255023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
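Annotation: the generated kubeadm config above chains four documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Note that eviction thresholds are deliberately zeroed ("disable disk resource management by default") and the conntrack timeouts are 0s so kube-proxy skips sysctls it cannot set inside a container. A standard-library-only sketch that splits such a multi-document file and reports the kind of each part; the inline string is an abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml.new:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        doc := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    `
        kind := regexp.MustCompile(`(?m)^\s*kind: (.+)$`)
        for i, part := range strings.Split(doc, "---") {
            if m := kind.FindStringSubmatch(part); m != nil {
                fmt.Printf("document %d: %s\n", i, m[1])
            }
        }
    }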
	I1216 04:04:42.451660 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:04:42.462557 2078887 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:04:42.462665 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:04:42.470706 2078887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:04:42.484036 2078887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:04:42.496679 2078887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 04:04:42.510060 2078887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:04:42.514034 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:04:42.523944 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.642280 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:42.658128 2078887 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023 for IP: 192.168.85.2
	I1216 04:04:42.658161 2078887 certs.go:195] generating shared ca certs ...
	I1216 04:04:42.658178 2078887 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:42.658357 2078887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:04:42.658425 2078887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:04:42.658440 2078887 certs.go:257] generating profile certs ...
	I1216 04:04:42.658560 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.key
	I1216 04:04:42.658648 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5
	I1216 04:04:42.658713 2078887 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key
	I1216 04:04:42.658847 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:04:42.658904 2078887 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:04:42.658920 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:04:42.658963 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:04:42.659011 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:04:42.659085 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:04:42.659170 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:42.659889 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:04:42.682344 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:04:42.731773 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:04:42.759464 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:04:42.781713 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:04:42.800339 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:04:42.819107 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:04:42.837811 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 04:04:42.856139 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:04:42.873711 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:04:42.892395 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:04:42.910549 2078887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:04:42.924760 2078887 ssh_runner.go:195] Run: openssl version
	I1216 04:04:42.931736 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.940294 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:04:42.948204 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952285 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952396 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.993553 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:04:43.001452 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.010861 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:04:43.019267 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023881 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023989 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.065733 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:04:43.074014 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.082044 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:04:43.090335 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094833 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094908 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.137155 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
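Annotation: each CA above is installed by symlinking the PEM into /usr/share/ca-certificates and then linking its OpenSSL subject hash (b5213941.0, 51391683.0, ...) into /etc/ssl/certs, which is what the "openssl x509 -hash -noout" plus "ln -fs" pairs in the log accomplish, much like c_rehash. A sketch that shells out to openssl for the hash and creates the link; the helper name and the /tmp target are assumptions:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mimics the log's openssl/ln pair: compute the cert's
    // subject hash and symlink <hash>.0 in certDir back to the PEM file.
    func linkBySubjectHash(pem, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp"); err != nil {
            log.Fatal(err)
        }
    }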
	I1216 04:04:43.145351 2078887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:04:43.149907 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:04:43.192388 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:04:43.235812 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:04:43.277441 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:04:43.318805 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:04:43.360025 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
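Annotation: the run of "openssl x509 -noout -checkend 86400" calls above verifies that none of the control-plane certs expire within the next 24 hours; checkend exits nonzero when a cert is that close to expiry, which would trigger regeneration. The same test in native Go with crypto/x509 (the cert path is taken from the log; run it where such a file exists):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // the native equivalent of: openssl x509 -noout -checkend <seconds>
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }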
	I1216 04:04:43.402731 2078887 kubeadm.go:401] StartCluster: {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:43.402829 2078887 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:04:43.402928 2078887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:04:43.429949 2078887 cri.go:89] found id: ""
	I1216 04:04:43.430063 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:04:43.452392 2078887 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:04:43.452428 2078887 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:04:43.452517 2078887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:04:43.466566 2078887 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:04:43.467070 2078887 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.467226 2078887 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-255023" cluster setting kubeconfig missing "no-preload-255023" context setting]
	I1216 04:04:43.467608 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.469283 2078887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:04:43.485730 2078887 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 04:04:43.485775 2078887 kubeadm.go:602] duration metric: took 33.340688ms to restartPrimaryControlPlane
	I1216 04:04:43.485805 2078887 kubeadm.go:403] duration metric: took 83.08421ms to StartCluster
	I1216 04:04:43.485836 2078887 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.485913 2078887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.486639 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.486917 2078887 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:04:43.487330 2078887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:04:43.487405 2078887 addons.go:70] Setting storage-provisioner=true in profile "no-preload-255023"
	I1216 04:04:43.487422 2078887 addons.go:239] Setting addon storage-provisioner=true in "no-preload-255023"
	I1216 04:04:43.487445 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.488102 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.488423 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:43.488504 2078887 addons.go:70] Setting dashboard=true in profile "no-preload-255023"
	I1216 04:04:43.488521 2078887 addons.go:239] Setting addon dashboard=true in "no-preload-255023"
	W1216 04:04:43.488541 2078887 addons.go:248] addon dashboard should already be in state true
	I1216 04:04:43.488579 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.489074 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.491900 2078887 addons.go:70] Setting default-storageclass=true in profile "no-preload-255023"
	I1216 04:04:43.491932 2078887 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-255023"
	I1216 04:04:43.492873 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.492947 2078887 out.go:179] * Verifying Kubernetes components...
	I1216 04:04:43.501909 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:43.540041 2078887 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:04:43.544945 2078887 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:04:43.547811 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:04:43.547843 2078887 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:04:43.547914 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.559171 2078887 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:04:43.559336 2078887 addons.go:239] Setting addon default-storageclass=true in "no-preload-255023"
	I1216 04:04:43.559370 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.559803 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.563234 2078887 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.563261 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:04:43.563329 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.613200 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.627516 2078887 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.627538 2078887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:04:43.627600 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.647225 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.663344 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.730458 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:43.761779 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:04:43.761800 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:04:43.776412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:04:43.776431 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:04:43.790891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:04:43.790913 2078887 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:04:43.792062 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.811412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:04:43.811477 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:04:43.827119 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:04:43.827185 2078887 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:04:43.838623 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.851891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:04:43.851965 2078887 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:04:43.868373 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:04:43.868445 2078887 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:04:43.883425 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:04:43.883498 2078887 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:04:43.898225 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:43.898297 2078887 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:04:43.913600 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
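The lines above show minikube's addon staging flow: it opens SSH sessions to the node container on 127.0.0.1:34664, copies each manifest into /etc/kubernetes/addons/ on the node, and then runs the node's bundled kubectl against the in-node kubeconfig. A minimal sketch of that stage-then-apply step, assuming the system ssh/scp binaries and a hypothetical local storageclass.yaml stand in for minikube's internal ssh_runner and embedded assets (host, port, key path, and remote paths are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Connection details copied from the sshutil.go:53 lines above; the
    // local manifest name is hypothetical.
    const (
        sshPort = "34664"
        sshKey  = "/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa"
        target  = "docker@127.0.0.1"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %w\n%s", name, err, out)
        }
        return nil
    }

    func main() {
        // Stage the manifest into the addons directory on the node.
        if err := run("scp", "-P", sshPort, "-i", sshKey,
            "storageclass.yaml", target+":/etc/kubernetes/addons/storageclass.yaml"); err != nil {
            fmt.Println("stage failed:", err)
            return
        }
        // Apply it with the node's own kubectl and kubeconfig, as logged.
        if err := run("ssh", "-p", sshPort, "-i", sshKey, target,
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml"); err != nil {
            fmt.Println("apply failed:", err)
        }
    }
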
	I1216 04:04:44.438735 2078887 node_ready.go:35] waiting up to 6m0s for node "no-preload-255023" to be "Ready" ...
	W1216 04:04:44.439130 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439312 2078887 retry.go:31] will retry after 305.613762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.439316 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439337 2078887 retry.go:31] will retry after 363.187652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.439533 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439559 2078887 retry.go:31] will retry after 272.903595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.713163 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:44.745739 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:44.781147 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.781180 2078887 retry.go:31] will retry after 329.721194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.803439 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:44.821890 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.821920 2078887 retry.go:31] will retry after 342.537223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.869557 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.869592 2078887 retry.go:31] will retry after 400.087881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
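From 04:04:44.713 onward the commands switch to kubectl apply --force, which only changes conflict handling (delete and recreate on immutable-field conflicts); it does not skip validation, so the OpenAPI download keeps failing until the apiserver comes up. The error text suggests --validate=false as an escape hatch, but the cleaner approach is to wait for apiserver readiness before applying. A hedged sketch that probes the apiserver's standard /readyz endpoint (the port comes from the log; the probe itself is illustrative, not minikube's code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // The apiserver presents a self-signed cert on localhost;
                // skipping verification is acceptable for a readiness probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(90 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://localhost:8443/readyz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver ready; safe to apply addons")
                    return
                }
            } else {
                fmt.Println("not ready:", err) // e.g. "connection refused", as above
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver never became ready")
    }
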
	I1216 04:04:45.112248 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:45.165426 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.247199 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.247273 2078887 retry.go:31] will retry after 632.091254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.270745 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.301341 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.301432 2078887 retry.go:31] will retry after 431.279641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:45.357125 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.357163 2078887 retry.go:31] will retry after 448.988888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.733393 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.794896 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.794941 2078887 retry.go:31] will retry after 735.19991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.807205 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.867083 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.867117 2078887 retry.go:31] will retry after 568.360561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.880293 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:45.942564 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.942654 2078887 retry.go:31] will retry after 591.592305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.436264 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:46.439868 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
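The node_ready.go lines bracket the same outage from the Kubernetes API side: the 6-minute Ready wait started at 04:04:44 polls GET /api/v1/nodes/no-preload-255023 and gets connection refused on 192.168.85.2:8443. A minimal client-go sketch of that Ready-condition poll, with the kubeconfig path, node name, and timeout taken from the log (this approximates the behavior the node_ready.go:35/55 lines describe; it is not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s" above
        for time.Now().Before(deadline) {
            ready, err := nodeReady(cs, "no-preload-255023")
            switch {
            case err != nil:
                fmt.Println("error getting node (will retry):", err)
            case ready:
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for node to be Ready")
    }
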
	W1216 04:04:46.515391 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.515423 2078887 retry.go:31] will retry after 863.502918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.530605 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:46.535089 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:46.607927 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.607962 2078887 retry.go:31] will retry after 1.115944939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:46.613433 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.613467 2078887 retry.go:31] will retry after 961.68966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.379736 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:47.458969 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.459090 2078887 retry.go:31] will retry after 1.606575866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.575407 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:47.642476 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.642514 2078887 retry.go:31] will retry after 2.560273252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.724901 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:47.785232 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.785270 2078887 retry.go:31] will retry after 2.616642999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:48.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:49.066818 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:49.131769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:49.131806 2078887 retry.go:31] will retry after 3.366815571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.203910 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:50.281554 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.281591 2078887 retry.go:31] will retry after 3.322699521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.403034 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:50.475418 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.475451 2078887 retry.go:31] will retry after 3.920781833s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:50.940166 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:52.499306 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:52.566228 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:52.566262 2078887 retry.go:31] will retry after 2.315880156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:53.440268 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:53.604610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:53.664371 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:53.664411 2078887 retry.go:31] will retry after 4.867931094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.396477 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:54.458906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.458940 2078887 retry.go:31] will retry after 6.25682185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.882414 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:54.945906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.945940 2078887 retry.go:31] will retry after 8.419891658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
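
The delays in the retry.go:31 lines grow roughly from ~1.6s toward ~12s and are not strictly monotonic, which suggests exponential backoff with jitter across the three parallel appliers (storageclass, dashboard, storage-provisioner). A minimal retry-with-backoff sketch in that spirit follows; the function name and the doubling-plus-jitter policy are assumptions for illustration, not minikube's actual retry.go:

// A minimal retry-with-backoff sketch, assuming behavior like the
// "will retry after Ns" lines above; illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(maxAttempts int, base time.Duration, op func() error) error {
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := op()
		if err == nil {
			return nil
		}
		if attempt == maxAttempts {
			return err
		}
		// Jitter the delay so parallel appliers don't retry in lockstep,
		// then double it for the next attempt.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("unreachable")
}

func main() {
	_ = retryWithBackoff(5, time.Second, func() error {
		return errors.New("connection refused") // stand-in for the failing kubectl apply
	})
}
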
	W1216 04:04:55.939826 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:04:58.439821 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:58.533209 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:58.597277 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:58.597315 2078887 retry.go:31] will retry after 8.821330278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:00.440193 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:00.716680 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:00.792490 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:00.792525 2078887 retry.go:31] will retry after 4.988340186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:02.939239 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:03.366954 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:03.427635 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:03.427664 2078887 retry.go:31] will retry after 11.977275357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:04.939595 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:05.781026 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:05.843492 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:05.843528 2078887 retry.go:31] will retry after 12.145550583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:06.939757 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:07.419555 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:07.505584 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:07.505621 2078887 retry.go:31] will retry after 12.780052365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:08.940202 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:11.440118 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:13.940295 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
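
Interleaved with the addon retries, node_ready.go polls the node object at https://192.168.85.2:8443/api/v1/nodes/no-preload-255023 at roughly 2.5-second intervals, and every poll hits the same refused connection. A sketch of such a reachability poll is below; the URL comes from the log, while the ticker interval, overall deadline, and unauthenticated TLS-skipping client are assumptions (an unauthenticated GET would normally get a 401/403 once the apiserver is up, so this only checks reachability):

// Sketch of the readiness poll suggested by the node_ready.go lines above:
// hit the node endpoint every 2.5s until the apiserver answers or a
// deadline passes. Illustrative only.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023"
	for {
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting for node:", ctx.Err())
			return
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("will retry:", err) // e.g. "connect: connection refused"
				continue
			}
			resp.Body.Close()
			fmt.Println("node endpoint reachable:", resp.Status)
			return
		}
	}
}
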
	I1216 04:05:15.405274 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:15.464480 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:15.464513 2078887 retry.go:31] will retry after 7.284769957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:16.439936 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:17.989703 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:18.058004 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:18.058043 2078887 retry.go:31] will retry after 16.677849322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:18.440048 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:20.286526 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:20.345776 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:20.345812 2078887 retry.go:31] will retry after 16.385541559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
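
Every validation failure in this run has the same root cause: kubectl validates manifests against the API server's OpenAPI document at https://localhost:8443/openapi/v2, and that connection is refused because the apiserver is not listening, so none of the dashboard, storageclass, or storage-provisioner manifests are actually malformed. A quick way to confirm that, sketched in Go (a hypothetical probe, not part of minikube):

    // Hypothetical probe: "connection refused" on the apiserver port means
    // the validation errors above are a symptom of the apiserver being down,
    // not of a broken manifest.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // expect "connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open; a validation error here would be genuine")
    }
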
	W1216 04:05:20.939362 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:22.749528 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:22.811867 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:22.811905 2078887 retry.go:31] will retry after 14.258552084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:22.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:25.439972 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:27.940078 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:30.440042 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:32.939848 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:34.736331 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:34.794887 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:34.794921 2078887 retry.go:31] will retry after 31.126157271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:35.439532 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:36.732300 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:36.795769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:36.795804 2078887 retry.go:31] will retry after 23.567098644s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.070890 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:37.130033 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.130066 2078887 retry.go:31] will retry after 22.575569039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:37.439758 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:39.439923 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:41.939932 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:44.439453 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:46.440129 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:48.939948 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:51.440009 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:53.939968 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
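
The node_ready.go:55 lines poll GET /api/v1/nodes/no-preload-255023 roughly every 2.5 seconds and inspect the node's "Ready" condition; here every poll dies at the TCP layer before a condition can be read. A hedged client-go sketch of that readiness loop (assumes a reachable cluster and the kubeconfig path used above; helper names are illustrative):

    // Illustrative client-go version of the node_ready.go:55 poll.
    // Assumes /var/lib/minikube/kubeconfig is valid and reachable.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(c *kubernetes.Clientset, name string) (bool, error) {
    	node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "dial tcp ...: connect: connection refused"
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		ok, err := nodeReady(client, "no-preload-255023")
    		if err != nil {
    			fmt.Println("will retry:", err)
    		} else if ok {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence in the log
    	}
    }
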
	I1216 04:05:57.654978 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000290251s
	I1216 04:05:57.655319 2073073 kubeadm.go:319] 
	I1216 04:05:57.655442 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:05:57.655501 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:05:57.655753 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:05:57.655760 2073073 kubeadm.go:319] 
	I1216 04:05:57.656095 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:05:57.656161 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:05:57.656330 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:05:57.656339 2073073 kubeadm.go:319] 
	I1216 04:05:57.661429 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:05:57.661908 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:05:57.662048 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:05:57.662311 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:05:57.662326 2073073 kubeadm.go:319] 
	I1216 04:05:57.662412 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 04:05:57.662579 2073073 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000290251s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
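
kubeadm's verdict here rests on one check: it polls the kubelet's local health endpoint at http://127.0.0.1:10248/healthz for up to 4m0s, and the timeout ("context deadline exceeded") is what fails the run. The three [WARNING ...] lines (missing "configs" kernel module, cgroups v1 deprecation, kubelet service not enabled) are advisory and do not by themselves abort init. A hedged Go rendering of that kubelet-check, illustrative rather than kubeadm's actual code:

    // Illustrative version of kubeadm's kubelet-check: poll the healthz
    // endpoint until it returns 200 OK or the 4-minute deadline passes.
    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
    			"http://127.0.0.1:10248/healthz", nil)
    		resp, err := http.DefaultClient.Do(req)
    		if err == nil {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				fmt.Println("kubelet is healthy")
    				return
    			}
    		}
    		select {
    		case <-ctx.Done():
    			// matches the fatal log line: "context deadline exceeded"
    			fmt.Println("kubelet not healthy:", ctx.Err())
    			return
    		case <-time.After(time.Second):
    		}
    	}
    }
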
	
	I1216 04:05:57.662661 2073073 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 04:05:58.084120 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:05:58.098877 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:05:58.098960 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:05:58.107810 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:05:58.107839 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:05:58.107907 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:05:58.116252 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:05:58.116319 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:05:58.123966 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:05:58.131928 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:05:58.131999 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:05:58.139938 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.148354 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:05:58.148421 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.155951 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:05:58.163949 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:05:58.164019 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
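
The cleanup sequence above works per file: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete the file when the endpoint is missing. Because the preceding kubeadm reset already removed all four files, every grep exits with status 2 and the rm calls are no-ops, which is why the "found existing configuration files" list is empty. A Go sketch of that per-file check (paths and endpoint as in the log; the logic is an illustrative reconstruction, not minikube's source):

    // Illustrative reconstruction of the stale-kubeconfig check: keep a
    // file only if it already points at the expected control-plane endpoint.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f) // absent or stale: let kubeadm regenerate it
    			fmt.Println("removed or absent:", f)
    			continue
    		}
    		fmt.Println("kept:", f)
    	}
    }
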
	I1216 04:05:58.172134 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:05:58.209714 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:05:58.209936 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:05:58.280761 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:05:58.280869 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:05:58.280943 2073073 kubeadm.go:319] OS: Linux
	I1216 04:05:58.281014 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:05:58.281081 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:05:58.281135 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:05:58.281192 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:05:58.281251 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:05:58.281316 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:05:58.281370 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:05:58.281425 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:05:58.281480 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:05:58.347935 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:05:58.348070 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:05:58.348235 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:05:58.355578 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:05:58.361023 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:05:58.361193 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:05:58.361322 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:05:58.361438 2073073 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:05:58.361549 2073073 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:05:58.361663 2073073 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:05:58.367384 2073073 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:05:58.367458 2073073 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:05:58.367521 2073073 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:05:58.367595 2073073 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:05:58.367668 2073073 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:05:58.367706 2073073 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:05:58.367762 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:05:58.550047 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:05:59.040542 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:05:59.832816 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:06:00.196554 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:06:00.344590 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:06:00.344735 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:06:00.344804 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:06:00.348130 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:06:00.348264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:06:00.348345 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:06:00.348416 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	W1216 04:05:56.439836 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:58.939363 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:59.706828 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:59.787641 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:59.787749 2078887 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
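
Once the retry budget for an addon is exhausted, minikube downgrades the failure to a warning (the "! Enabling 'default-storageclass' returned an error: running callbacks: [...]" block above, with the full command output embedded between the brackets) and continues starting the cluster. A minimal sketch of that surfacing step, with an assumed helper name:

    // Sketch of surfacing an exhausted addon retry as a warning instead of
    // aborting start-up. enableAddon is an assumed name, not minikube's API.
    package main

    import (
    	"errors"
    	"fmt"
    )

    func enableAddon(name string, apply func() error) {
    	if err := apply(); err != nil {
    		// mirrors the out.go:285 warning format seen in the log
    		fmt.Printf("! Enabling '%s' returned an error: running callbacks: [%v]\n", name, err)
    	}
    }

    func main() {
    	enableAddon("default-storageclass", func() error {
    		return errors.New("Process exited with status 1")
    	})
    }
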
	I1216 04:06:00.368445 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:00.461740 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:00.461775 2078887 retry.go:31] will retry after 38.977225184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:00.939472 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:00.386806 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:06:00.386918 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:06:00.399084 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:06:00.400082 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:06:00.400137 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:06:00.555518 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:06:00.555632 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
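
From here the log interleaves two minikube processes: PID 2073073 is newest-cni-450938's second kubeadm init attempt, and PID 2078887 is no-preload-255023's addon and readiness loop, which is why the timestamps appear to jump. A small hypothetical Go filter for following one cluster at a time (assumes the standard klog header layout "I1216 04:06:00.555632 2073073 file.go:123] msg", where the third field is the PID):

    // Hypothetical helper for reading interleaved klog output: print only
    // the lines whose PID field (third whitespace-separated token) matches.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const want = "2073073" // or "2078887" for the no-preload cluster
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 3 && fields[2] == want {
    			fmt.Println(sc.Text())
    		}
    	}
    }
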
	W1216 04:06:03.439401 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:05.439853 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:05.921308 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:06.013608 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:06.013643 2078887 retry.go:31] will retry after 27.262873571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:07.440089 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:09.440233 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:11.939830 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:13.940070 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:16.440297 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:18.940013 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:21.439376 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:23.440046 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:25.440167 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:27.939439 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:29.939765 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:31.940029 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:33.277682 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:33.336094 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:33.336187 2078887 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
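Note: the --validate=false workaround suggested in the stderr above only disables client-side schema validation; the underlying dial tcp [::1]:8443: connect: connection refused means the apiserver itself is not listening on 8443, so the apply would fail either way. A quick way to confirm the apiserver state from the host (a sketch; the profile name no-preload-255023 is taken from the log above, and running a command through minikube ssh is assumed to be available here):

	minikube -p no-preload-255023 ssh -- curl -sk https://localhost:8443/healthz

Connection refused from this probe points at the control plane, not at the addon manifests.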
	W1216 04:06:34.439449 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:36.440148 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:38.939781 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:39.439610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:39.498473 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:39.498576 2078887 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:06:39.501951 2078887 out.go:179] * Enabled addons: 
	I1216 04:06:39.505615 2078887 addons.go:530] duration metric: took 1m56.018282146s for enable addons: enabled=[]
	W1216 04:06:40.939880 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	[84 further identical node_ready.go:55 connection-refused retries, logged roughly every 2-2.5s from 04:06:42 through 04:09:52, elided]
	W1216 04:09:55.440234 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:10:00.553098 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000446161s
	I1216 04:10:00.553139 2073073 kubeadm.go:319] 
	I1216 04:10:00.553240 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:10:00.553447 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:10:00.553632 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:10:00.553642 2073073 kubeadm.go:319] 
	I1216 04:10:00.554240 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:10:00.554310 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:10:00.554364 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:10:00.554369 2073073 kubeadm.go:319] 
	I1216 04:10:00.559672 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:10:00.560440 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:10:00.560638 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:10:00.560897 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:10:00.560908 2073073 kubeadm.go:319] 
	I1216 04:10:00.561045 2073073 kubeadm.go:403] duration metric: took 8m7.05045578s to StartCluster
	I1216 04:10:00.561088 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:10:00.561095 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
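The kubelet health probe that kubeadm reports as failing can be replayed by hand on the node; the endpoint and the two troubleshooting commands below are quoted verbatim from the kubeadm output above (a sketch for reproduction, not part of the test run):

	curl -sSL http://127.0.0.1:10248/healthz
	systemctl status kubelet
	journalctl -xeu kubelet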
	I1216 04:10:00.561160 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:10:00.591750 2073073 cri.go:89] found id: ""
	I1216 04:10:00.591842 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.591857 2073073 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:10:00.591866 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:10:00.591936 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:10:00.621403 2073073 cri.go:89] found id: ""
	I1216 04:10:00.621441 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.621454 2073073 logs.go:284] No container was found matching "etcd"
	I1216 04:10:00.621463 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:10:00.621538 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:10:00.650404 2073073 cri.go:89] found id: ""
	I1216 04:10:00.650434 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.650444 2073073 logs.go:284] No container was found matching "coredns"
	I1216 04:10:00.650451 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:10:00.650524 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:10:00.680445 2073073 cri.go:89] found id: ""
	I1216 04:10:00.680521 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.680536 2073073 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:10:00.680543 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:10:00.680611 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:10:00.710361 2073073 cri.go:89] found id: ""
	I1216 04:10:00.710396 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.710406 2073073 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:10:00.710412 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:10:00.710473 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:10:00.748242 2073073 cri.go:89] found id: ""
	I1216 04:10:00.748318 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.748352 2073073 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:10:00.748389 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:10:00.748488 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:10:00.780257 2073073 cri.go:89] found id: ""
	I1216 04:10:00.780341 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.780365 2073073 logs.go:284] No container was found matching "kindnet"
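The sweep above probes each expected control-plane component with the same crictl invocation, varying only the name filter, e.g.:

	sudo crictl ps -a --quiet --name=kube-apiserver

Every probe returns an empty id list, i.e. no control-plane container was ever created, which is consistent with the kubelet never passing its health check.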
	I1216 04:10:00.780402 2073073 logs.go:123] Gathering logs for kubelet ...
	I1216 04:10:00.780432 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:10:00.837018 2073073 logs.go:123] Gathering logs for dmesg ...
	I1216 04:10:00.837057 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:10:00.854084 2073073 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:10:00.854114 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:10:00.921357 2073073 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:10:00.911353    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.912410    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.913336    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915304    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915802    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[the same five connection-refused memcache.go:265 errors and the localhost:8443 message repeated]
	
	** /stderr **
	I1216 04:10:00.921436 2073073 logs.go:123] Gathering logs for containerd ...
	I1216 04:10:00.921463 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:10:00.962148 2073073 logs.go:123] Gathering logs for container status ...
	I1216 04:10:00.962187 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
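All of the log-gathering commands above execute inside the node over SSH; they can be replayed interactively when debugging a failed start (a sketch, assuming the newest-cni-450938 profile named in the containerd log further below):

	minikube -p newest-cni-450938 ssh
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo crictl ps -a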
	W1216 04:10:00.992078 2073073 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000446161s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:10:00.992139 2073073 out.go:285] * 
	W1216 04:10:00.992191 2073073 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical kubeadm init output, warnings, and wait-control-plane error as in the "Error starting cluster" block above, elided]
	
	W1216 04:10:00.992219 2073073 out.go:285] * 
	W1216 04:10:00.994876 2073073 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:10:01.000867 2073073 out.go:203] 
	W1216 04:10:01.005125 2073073 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical kubeadm init output, warnings, and wait-control-plane error as in the "Error starting cluster" block above, elided]
	
	W1216 04:10:01.005428 2073073 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:10:01.005513 2073073 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:10:01.011350 2073073 out.go:203] 
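	[editor's note] The wait-control-plane failure above is kubeadm polling the kubelet's local health endpoint until the 4m0s deadline expires. Assuming shell access to the node, the probe can be replayed by hand, and the cgroup version the warnings complain about can be confirmed (the curl line is taken from the error above; stat is a standard coreutils check, not from this log):
	    # replay the health probe named in the 'error execution phase wait-control-plane' line
	    curl -sSL http://127.0.0.1:10248/healthz
	    # 'cgroup2fs' means cgroup v2; 'tmpfs' means the deprecated cgroup v1 flagged above
	    stat -fc %T /sys/fs/cgroup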
	W1216 04:09:57.939424 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:00.445310 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934013777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934085701Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934187689Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934261254Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934323858Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934386954Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934441985Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934504367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934572230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934660499Z" level=info msg="Connect containerd service"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.935078270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.935783846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.946850869Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.947102868Z" level=info msg="Start recovering state"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.947203042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.947265121Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990839081Z" level=info msg="Start event monitor"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990896319Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990906747Z" level=info msg="Start streaming server"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990916880Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990926283Z" level=info msg="runtime interface starting up..."
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990938386Z" level=info msg="starting plugins..."
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990951070Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:01:51 newest-cni-450938 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.992912455Z" level=info msg="containerd successfully booted in 0.086898s"
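	[editor's note] The "failed to load cni during init" error above is expected on a fresh node: containerd's CRI plugin retries until a CNI config appears in /etc/cni/net.d, which for this profile only happens once the kubelet and a CNI (kindnet here) come up. A quick sketch of the checks, assuming shell access to the node:
	    # empty here, matching the error above
	    ls -l /etc/cni/net.d
	    # CRI runtime status, including the NetworkReady condition
	    sudo crictl info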
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:10:02.193125    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:02.194008    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:02.195725    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:02.196233    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:02.197871    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
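	[editor's note] The connection-refused errors above are a symptom, not a cause: nothing listens on 8443 because the kubelet never launched the static pods. Two checks that separate "manifests missing" from "kubelet not running them", assuming shell access to the node (these invocations are standard, not taken from this log):
	    # kubeadm wrote these during the [control-plane] phase earlier in the log
	    ls /etc/kubernetes/manifests
	    # no hits here means the kubelet never started the apiserver container
	    sudo crictl ps -a --name kube-apiserver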
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:10:02 up  9:52,  0 user,  load average: 0.39, 0.73, 1.42
	Linux newest-cni-450938 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:09:59 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:09:59 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 16 04:09:59 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:09:59 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:09:59 newest-cni-450938 kubelet[4715]: E1216 04:09:59.982183    4715 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:09:59 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:09:59 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:10:00 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 04:10:00 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:00 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:00 newest-cni-450938 kubelet[4754]: E1216 04:10:00.764988    4754 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:10:00 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:10:00 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:10:01 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 04:10:01 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:01 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:01 newest-cni-450938 kubelet[4820]: E1216 04:10:01.468542    4820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:10:01 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:10:01 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:10:02 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 04:10:02 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:02 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:02 newest-cni-450938 kubelet[4910]: E1216 04:10:02.249087    4910 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:10:02 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:10:02 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
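	[editor's note] The restart loop above (counters 319 through 322) is the kubelet v1.35 validation announced in the SystemVerification warning earlier: on a cgroup v1 host it exits unless the configuration explicitly opts in. A minimal sketch of the opt-in, assuming direct node access and that minikube does not regenerate the file on the next start:
	    # field name per the warning above ('FailCgroupV1' -> failCgroupV1 in YAML)
	    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	    sudo systemctl restart kubelet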
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938: exit status 6 (358.456049ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:10:02.770664 2085541 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-450938" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (502.47s)
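[editor's note] The "stale minikube-vm" warning and the status.go:458 kubeconfig error share one cause: the profile's endpoint was never written to the kubeconfig. The fix the output itself suggests, scoped to this profile (assuming the profile directory survived the failed start):
    out/minikube-linux-arm64 update-context -p newest-cni-450938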

x
+
TestStartStop/group/no-preload/serial/DeployApp (2.97s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-255023 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-255023 create -f testdata/busybox.yaml: exit status 1 (58.731495ms)

** stderr ** 
	error: context "no-preload-255023" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-255023 create -f testdata/busybox.yaml failed: exit status 1
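[editor's note] 'context "no-preload-255023" does not exist' is purely client-side: kubectl never contacts the cluster because the context was never created. Assuming the same KUBECONFIG the test uses, the state can be confirmed with:
    kubectl config get-contexts
    kubectl config current-context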
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-255023
helpers_test.go:244: (dbg) docker inspect no-preload-255023:

-- stdout --
	[
	    {
	        "Id": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	        "Created": "2025-12-16T03:54:15.810217174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2047579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:54:15.877443945Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e-json.log",
	        "Name": "/no-preload-255023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-255023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-255023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	                "LowerDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-255023",
	                "Source": "/var/lib/docker/volumes/no-preload-255023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-255023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-255023",
	                "name.minikube.sigs.k8s.io": "no-preload-255023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "256e4f9aa86f99f79faacaa868cdf31f4b2fc13a757dc64960cd771c6c4ff8b0",
	            "SandboxKey": "/var/run/docker/netns/256e4f9aa86f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34629"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34630"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34633"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34631"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34632"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-255023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:22:4d:72:1b:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba784dbb0bf675265a222a2ccbfc260249ee6464ab188d5ef5e9194204ab459f",
	                    "EndpointID": "bb7e6178d0c584a363e69f7c998efcccf04a6debdd8cca59ecd1f85a3daebffe",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-255023",
	                        "9e19dbb9154c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
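[editor's note] For post-mortems, the full inspect dump above can be narrowed with Go templates; a sketch pulling the two fields that matter here, using only paths visible in the JSON above:
    docker inspect -f '{{.State.Status}}' no-preload-255023
    # host port published for the apiserver's 8443/tcp (34632 in the dump above)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-255023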
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 6 (354.197833ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:02:47.295448 2076054 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-255023 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-580645 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-092028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ stop    │ -p embed-certs-092028 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:58 UTC │
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:01:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:01:40.358627 2073073 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:01:40.358771 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.358784 2073073 out.go:374] Setting ErrFile to fd 2...
	I1216 04:01:40.358790 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.359119 2073073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:01:40.359639 2073073 out.go:368] Setting JSON to false
	I1216 04:01:40.360571 2073073 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35045,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:01:40.360643 2073073 start.go:143] virtualization:  
	I1216 04:01:40.364536 2073073 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:01:40.367700 2073073 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:01:40.367780 2073073 notify.go:221] Checking for updates...
	I1216 04:01:40.374179 2073073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:01:40.377177 2073073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:01:40.380122 2073073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:01:40.382984 2073073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:01:40.385825 2073073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:01:40.389346 2073073 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:40.389442 2073073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:01:40.423035 2073073 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:01:40.423253 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.478281 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.468443485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.478389 2073073 docker.go:319] overlay module found
	I1216 04:01:40.481589 2073073 out.go:179] * Using the docker driver based on user configuration
	I1216 04:01:40.484342 2073073 start.go:309] selected driver: docker
	I1216 04:01:40.484360 2073073 start.go:927] validating driver "docker" against <nil>
	I1216 04:01:40.484390 2073073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:01:40.485138 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.540618 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.531037075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.540793 2073073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 04:01:40.540832 2073073 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 04:01:40.541056 2073073 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:01:40.544109 2073073 out.go:179] * Using Docker driver with root privileges
	I1216 04:01:40.546924 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:40.547001 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:40.547019 2073073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:01:40.547159 2073073 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:40.552090 2073073 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:01:40.554928 2073073 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:01:40.557867 2073073 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:01:40.560695 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:40.560741 2073073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:01:40.560753 2073073 cache.go:65] Caching tarball of preloaded images
	I1216 04:01:40.560789 2073073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:01:40.560857 2073073 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:01:40.560868 2073073 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:01:40.560979 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:40.560997 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json: {Name:mkec760556e6c51ee205092e94b87aaba5f75b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:40.580559 2073073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:01:40.580583 2073073 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:01:40.580603 2073073 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:01:40.580637 2073073 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:01:40.580748 2073073 start.go:364] duration metric: took 89.631µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:01:40.580779 2073073 start.go:93] Provisioning new machine with config: &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:01:40.580854 2073073 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:01:40.584420 2073073 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 04:01:40.584656 2073073 start.go:159] libmachine.API.Create for "newest-cni-450938" (driver="docker")
	I1216 04:01:40.584695 2073073 client.go:173] LocalClient.Create starting
	I1216 04:01:40.584764 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 04:01:40.584813 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584835 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.584892 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 04:01:40.584915 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584931 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.585306 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:01:40.601358 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:01:40.601446 2073073 network_create.go:284] running [docker network inspect newest-cni-450938] to gather additional debugging logs...
	I1216 04:01:40.601465 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938
	W1216 04:01:40.616984 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 returned with exit code 1
	I1216 04:01:40.617014 2073073 network_create.go:287] error running [docker network inspect newest-cni-450938]: docker network inspect newest-cni-450938: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-450938 not found
	I1216 04:01:40.617029 2073073 network_create.go:289] output of [docker network inspect newest-cni-450938]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-450938 not found
	
	** /stderr **
	I1216 04:01:40.617127 2073073 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:40.633949 2073073 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 04:01:40.634331 2073073 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 04:01:40.634582 2073073 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 04:01:40.635035 2073073 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a049e0}
	I1216 04:01:40.635083 2073073 network_create.go:124] attempt to create docker network newest-cni-450938 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 04:01:40.635147 2073073 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-450938 newest-cni-450938
	I1216 04:01:40.694811 2073073 network_create.go:108] docker network newest-cni-450938 192.168.76.0/24 created
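The subnet scan above walks candidate private /24s, skipping 192.168.49/58/67 because existing bridge interfaces already hold them, and settles on 192.168.76.0/24 before running docker network create. A minimal manual check of the result, assuming the network name from this log (the --format template here is illustrative, not taken from minikube):

    docker network inspect newest-cni-450938 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.76.0/24 192.168.76.1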
	I1216 04:01:40.694847 2073073 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-450938" container
	I1216 04:01:40.694937 2073073 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:01:40.711581 2073073 cli_runner.go:164] Run: docker volume create newest-cni-450938 --label name.minikube.sigs.k8s.io=newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:01:40.729628 2073073 oci.go:103] Successfully created a docker volume newest-cni-450938
	I1216 04:01:40.729716 2073073 cli_runner.go:164] Run: docker run --rm --name newest-cni-450938-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --entrypoint /usr/bin/test -v newest-cni-450938:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:01:41.281285 2073073 oci.go:107] Successfully prepared a docker volume newest-cni-450938
	I1216 04:01:41.281356 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:41.281367 2073073 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:01:41.281445 2073073 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:01:45.222137 2073073 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.940644385s)
	I1216 04:01:45.222178 2073073 kic.go:203] duration metric: took 3.94080544s to extract preloaded images to volume ...
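The extraction above is the standard kic preload pattern: a throwaway container mounts the lz4 preload tarball read-only and the profile's named volume, and untars the cached images into the volume before the node container ever starts. Stripped of the long paths (PRELOAD and KICBASE stand in for the tarball path and pinned image digest shown in the log), the command reduces to:

    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v newest-cni-450938:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir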
	W1216 04:01:45.222367 2073073 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:01:45.222487 2073073 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:01:45.304396 2073073 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-450938 --name newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-450938 --network newest-cni-450938 --ip 192.168.76.2 --volume newest-cni-450938:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:01:45.622538 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Running}}
	I1216 04:01:45.645583 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:45.670160 2073073 cli_runner.go:164] Run: docker exec newest-cni-450938 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:01:45.723975 2073073 oci.go:144] the created container "newest-cni-450938" has a running status.
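Because the node container publishes SSH (and the other control ports) on ephemeral loopback ports via --publish=127.0.0.1::22, the actual host port has to be recovered from container metadata; the inspect template on the 22/tcp binding later in the log does exactly that. A shorter equivalent, assuming the same container name:

    docker port newest-cni-450938 22/tcp
    # e.g. 127.0.0.1:34659, the address the SSH dials below use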
	I1216 04:01:45.724003 2073073 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa...
	I1216 04:01:46.267889 2073073 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:01:46.287596 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.304458 2073073 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:01:46.304481 2073073 kic_runner.go:114] Args: [docker exec --privileged newest-cni-450938 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:01:46.342352 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.358427 2073073 machine.go:94] provisionDockerMachine start ...
	I1216 04:01:46.358587 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:46.375890 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:46.376242 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:46.376257 2073073 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:01:46.376910 2073073 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:01:49.515224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.515347 2073073 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:01:49.515465 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.535848 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.536182 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.536201 2073073 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:01:49.686017 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.686121 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.708351 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.708676 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.708700 2073073 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:01:49.847224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:01:49.847257 2073073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:01:49.847275 2073073 ubuntu.go:190] setting up certificates
	I1216 04:01:49.847284 2073073 provision.go:84] configureAuth start
	I1216 04:01:49.847343 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:49.866142 2073073 provision.go:143] copyHostCerts
	I1216 04:01:49.866218 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:01:49.866228 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:01:49.866302 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:01:49.866395 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:01:49.866400 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:01:49.866426 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:01:49.866481 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:01:49.866486 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:01:49.866507 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:01:49.866552 2073073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
	I1216 04:01:50.260935 2073073 provision.go:177] copyRemoteCerts
	I1216 04:01:50.261010 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:01:50.261061 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.278254 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.374622 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:01:50.392129 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:01:50.409364 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:01:50.427428 2073073 provision.go:87] duration metric: took 580.130211ms to configureAuth
	I1216 04:01:50.427478 2073073 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:01:50.427668 2073073 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:50.427681 2073073 machine.go:97] duration metric: took 4.069230888s to provisionDockerMachine
	I1216 04:01:50.427689 2073073 client.go:176] duration metric: took 9.842984311s to LocalClient.Create
	I1216 04:01:50.427703 2073073 start.go:167] duration metric: took 9.843048588s to libmachine.API.Create "newest-cni-450938"
	I1216 04:01:50.427714 2073073 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:01:50.427724 2073073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:01:50.427814 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:01:50.427858 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.444571 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.543256 2073073 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:01:50.546463 2073073 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:01:50.546490 2073073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:01:50.546502 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:01:50.546555 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:01:50.546641 2073073 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:01:50.546744 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:01:50.554130 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:50.571399 2073073 start.go:296] duration metric: took 143.669232ms for postStartSetup
	I1216 04:01:50.571809 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.589075 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:50.589367 2073073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:01:50.589424 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.606538 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.701772 2073073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:01:50.711675 2073073 start.go:128] duration metric: took 10.130806483s to createHost
	I1216 04:01:50.711705 2073073 start.go:83] releasing machines lock for "newest-cni-450938", held for 10.130943333s
	I1216 04:01:50.711776 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.732853 2073073 ssh_runner.go:195] Run: cat /version.json
	I1216 04:01:50.732921 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.733181 2073073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:01:50.733238 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.768130 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.773572 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.871263 2073073 ssh_runner.go:195] Run: systemctl --version
	I1216 04:01:50.965373 2073073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:01:50.969981 2073073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:01:50.970086 2073073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:01:51.000188 2073073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
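Here minikube sidelines pre-existing bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, so they cannot shadow the CNI it is about to install (kindnet, per the recommendation further down). A sketch of the reverse rename, assuming the same directory layout, in case the configs ever need restoring by hand:

    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;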
	I1216 04:01:51.000222 2073073 start.go:496] detecting cgroup driver to use...
	I1216 04:01:51.000256 2073073 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:01:51.000314 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:01:51.019286 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:01:51.033299 2073073 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:01:51.033403 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:01:51.051418 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:01:51.070273 2073073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:01:51.194121 2073073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:01:51.311613 2073073 docker.go:234] disabling docker service ...
	I1216 04:01:51.311729 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:01:51.333815 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:01:51.346480 2073073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:01:51.470333 2073073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:01:51.603299 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:01:51.616625 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:01:51.630599 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:01:51.640005 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:01:51.649178 2073073 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:01:51.649257 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:01:51.658373 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.667673 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:01:51.676660 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.685480 2073073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:01:51.694285 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:01:51.703488 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:01:51.712372 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:01:51.721367 2073073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:01:51.729097 2073073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:01:51.736893 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:51.844375 2073073 ssh_runner.go:195] Run: sudo systemctl restart containerd
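The sed series above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false to match the detected cgroupfs driver, sandbox_image pinned to registry.k8s.io/pause:3.10.1, the runc v2 shim, conf_dir = /etc/cni/net.d, and enable_unprivileged_ports = true, followed by a daemon-reload and restart. A quick spot-check of the result, assuming the same paths and the crictl endpoint written a few lines earlier:

    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head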
	I1216 04:01:51.993981 2073073 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:01:51.994107 2073073 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:01:51.998351 2073073 start.go:564] Will wait 60s for crictl version
	I1216 04:01:51.998465 2073073 ssh_runner.go:195] Run: which crictl
	I1216 04:01:52.005463 2073073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:01:52.032896 2073073 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:01:52.032981 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.059717 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.085644 2073073 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:01:52.088617 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:52.106258 2073073 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:01:52.110161 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.122893 2073073 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:01:52.125844 2073073 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:01:52.126001 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:52.126091 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.152470 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.152498 2073073 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:01:52.152563 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.176896 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.176919 2073073 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:01:52.176928 2073073 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:01:52.177016 2073073 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
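This drop-in overrides ExecStart so the kubelet runs the version-pinned binary under /var/lib/minikube/binaries with the bootstrap kubeconfig and --node-ip shown; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below. To see what systemd actually resolved (standard systemctl subcommands, assuming systemd inside the kicbase container):

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart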
	I1216 04:01:52.177086 2073073 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:01:52.218042 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:52.218071 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:52.218119 2073073 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:01:52.218150 2073073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:01:52.218321 2073073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
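The block above is the complete kubeadm.yaml minikube generates: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one multi-document file, later copied to /var/tmp/minikube/kubeadm.yaml. One hedged way to sanity-check such a file before a real init (a stock kubeadm flag, not something this log exercises):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run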
	
	I1216 04:01:52.218398 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:01:52.230127 2073073 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:01:52.230208 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:01:52.239812 2073073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:01:52.255679 2073073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:01:52.270419 2073073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1216 04:01:52.284034 2073073 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:01:52.287803 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.297256 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:52.412361 2073073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:01:52.428888 2073073 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:01:52.428959 2073073 certs.go:195] generating shared ca certs ...
	I1216 04:01:52.428991 2073073 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.429192 2073073 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:01:52.429285 2073073 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:01:52.429319 2073073 certs.go:257] generating profile certs ...
	I1216 04:01:52.429409 2073073 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:01:52.429451 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt with IP's: []
	I1216 04:01:52.591834 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt ...
	I1216 04:01:52.591928 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt: {Name:mk7778fd64a4e46926332e38f467016f166dd4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592375 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key ...
	I1216 04:01:52.592423 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key: {Name:mk64ab6c72a270d4e474bc857c4508cc11c704c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592850 2073073 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:01:52.592903 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 04:01:52.672242 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c ...
	I1216 04:01:52.672287 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c: {Name:mk3c094233344d156b233623b9dbfae4496ab12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672537 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c ...
	I1216 04:01:52.672554 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c: {Name:mke958b63de0c9e687b9653a66eec1e3497a17af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672658 2073073 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt
	I1216 04:01:52.672758 2073073 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key
	I1216 04:01:52.672837 2073073 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:01:52.672864 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt with IP's: []
	I1216 04:01:53.025120 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt ...
	I1216 04:01:53.025154 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt: {Name:mkca565dc28355ccf88123a839d9cc0986e3f757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025346 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key ...
	I1216 04:01:53.025361 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key: {Name:mkb5acdd577d99db642b84842da90293bb2494a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025563 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:01:53.025610 2073073 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:01:53.025625 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:01:53.025652 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:01:53.025681 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:01:53.025711 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:01:53.025764 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:53.026345 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:01:53.047604 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:01:53.067106 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:01:53.086400 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:01:53.106958 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:01:53.125852 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:01:53.144046 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:01:53.162443 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:01:53.180617 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:01:53.202914 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:01:53.228308 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:01:53.254030 2073073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:01:53.268427 2073073 ssh_runner.go:195] Run: openssl version
	I1216 04:01:53.275148 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.283060 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:01:53.291347 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295430 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295543 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.338110 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:01:53.345692 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
	I1216 04:01:53.354101 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.361981 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:01:53.369807 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.373913 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.374034 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.415192 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.422756 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.430342 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.438151 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:01:53.446120 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450114 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450180 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.491333 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:01:53.498914 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
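The openssl x509 -hash calls compute the subject-name hash OpenSSL uses for CA lookup, and each ln -fs creates the matching <hash>.0 symlink in /etc/ssl/certs (51391683.0, 3ec20f2e.0, b5213941.0 above). Reproducing one mapping by hand with the minikube CA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, the basename of the symlink just created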
	I1216 04:01:53.506771 2073073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:01:53.510538 2073073 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:01:53.510593 2073073 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:53.510681 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:01:53.510746 2073073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:01:53.537047 2073073 cri.go:89] found id: ""
	I1216 04:01:53.537176 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:01:53.545264 2073073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:01:53.553401 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:01:53.553502 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:01:53.561504 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:01:53.561527 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:01:53.561581 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:01:53.569732 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:01:53.569844 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:01:53.577622 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:01:53.585671 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:01:53.585743 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:01:53.593272 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.601710 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:01:53.601791 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.609698 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:01:53.617871 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:01:53.617953 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:01:53.625500 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:01:53.665591 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:01:53.665653 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:01:53.769108 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:01:53.769186 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:01:53.769232 2073073 kubeadm.go:319] OS: Linux
	I1216 04:01:53.769281 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:01:53.769333 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:01:53.769384 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:01:53.769436 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:01:53.769489 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:01:53.769544 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:01:53.769592 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:01:53.769644 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:01:53.769694 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:01:53.843812 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:01:53.843931 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:01:53.844032 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:01:53.849932 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:01:53.856727 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:01:53.856901 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:01:53.857012 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:01:54.280084 2073073 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:01:54.512481 2073073 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:01:55.160883 2073073 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:01:55.382188 2073073 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:01:55.675582 2073073 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:01:55.675752 2073073 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:55.934138 2073073 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:01:55.934424 2073073 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:56.047522 2073073 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:01:56.247778 2073073 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:01:56.462583 2073073 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:01:56.462916 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:01:56.695545 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:01:56.807074 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:01:56.888027 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:01:57.401338 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:01:57.476073 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:01:57.476371 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:01:57.480701 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:01:57.484671 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:01:57.484788 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:01:57.484862 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:01:57.485264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:01:57.510728 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:01:57.510840 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:01:57.520316 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:01:57.521983 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:01:57.522283 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:01:57.655090 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:01:57.655217 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:02:44.807737 2047247 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00116118s
	I1216 04:02:44.807769 2047247 kubeadm.go:319] 
	I1216 04:02:44.807828 2047247 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:02:44.807861 2047247 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:02:44.808332 2047247 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:02:44.808350 2047247 kubeadm.go:319] 
	I1216 04:02:44.808601 2047247 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:02:44.808660 2047247 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:02:44.809013 2047247 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:02:44.809031 2047247 kubeadm.go:319] 
	I1216 04:02:44.815240 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:02:44.815746 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:02:44.815895 2047247 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:02:44.816168 2047247 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:02:44.816181 2047247 kubeadm.go:319] 
	I1216 04:02:44.816298 2047247 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:02:44.816332 2047247 kubeadm.go:403] duration metric: took 8m6.942382888s to StartCluster
	I1216 04:02:44.816370 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:02:44.816433 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:02:44.842115 2047247 cri.go:89] found id: ""
	I1216 04:02:44.842201 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.842224 2047247 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:02:44.842244 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:02:44.842323 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:02:44.871535 2047247 cri.go:89] found id: ""
	I1216 04:02:44.871561 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.871575 2047247 logs.go:284] No container was found matching "etcd"
	I1216 04:02:44.871582 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:02:44.871639 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:02:44.895425 2047247 cri.go:89] found id: ""
	I1216 04:02:44.895448 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.895456 2047247 logs.go:284] No container was found matching "coredns"
	I1216 04:02:44.895462 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:02:44.895526 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:02:44.919895 2047247 cri.go:89] found id: ""
	I1216 04:02:44.919921 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.919930 2047247 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:02:44.919937 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:02:44.920004 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:02:44.950798 2047247 cri.go:89] found id: ""
	I1216 04:02:44.950826 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.950835 2047247 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:02:44.950841 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:02:44.950901 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:02:44.984134 2047247 cri.go:89] found id: ""
	I1216 04:02:44.984161 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.984170 2047247 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:02:44.984177 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:02:44.984238 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:02:45.019779 2047247 cri.go:89] found id: ""
	I1216 04:02:45.019872 2047247 logs.go:282] 0 containers: []
	W1216 04:02:45.019899 2047247 logs.go:284] No container was found matching "kindnet"
	I1216 04:02:45.019923 2047247 logs.go:123] Gathering logs for container status ...
	I1216 04:02:45.019972 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:02:45.069300 2047247 logs.go:123] Gathering logs for kubelet ...
	I1216 04:02:45.069346 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:02:45.143191 2047247 logs.go:123] Gathering logs for dmesg ...
	I1216 04:02:45.143236 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:02:45.166359 2047247 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:02:45.166399 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:02:45.288271 2047247 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:02:45.288296 2047247 logs.go:123] Gathering logs for containerd ...
	I1216 04:02:45.288311 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1216 04:02:45.336518 2047247 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:02:45.336599 2047247 out.go:285] * 
	W1216 04:02:45.336658 2047247 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.336677 2047247 out.go:285] * 
	W1216 04:02:45.341134 2047247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:02:45.347800 2047247 out.go:203] 
	W1216 04:02:45.350811 2047247 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.351828 2047247 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:02:45.351862 2047247 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:02:45.356103 2047247 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 03:54:26 no-preload-255023 containerd[758]: time="2025-12-16T03:54:26.067986383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.567349921Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.569948194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.588851575Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.596155920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.015464986Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.018332545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.026775300Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.027787428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.675775645Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.676978497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686002604Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686924289Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.808092869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.810368902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.821471844Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.822069729Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.124780620Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.126168452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.131232825Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.133433035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.474548837Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.476933667Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486012944Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486755540Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:47.913610    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:47.913992    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:47.915564    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:47.916055    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:47.917559    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:02:47 up  9:45,  0 user,  load average: 1.13, 1.85, 2.04
	Linux no-preload-255023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:02:44 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:45 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 04:02:45 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:45 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:45 no-preload-255023 kubelet[5462]: E1216 04:02:45.498243    5462 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:45 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:45 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:46 no-preload-255023 kubelet[5510]: E1216 04:02:46.259576    5510 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:47 no-preload-255023 kubelet[5595]: E1216 04:02:47.008736    5595 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 16 04:02:47 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:47 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:47 no-preload-255023 kubelet[5650]: E1216 04:02:47.748655    5650 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
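The kubelet journal above pins down the failure for this run: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning names the opt-out as the kubelet configuration option 'FailCgroupV1'. A minimal sketch of that opt-in follows, assuming the lower-camel-case KubeletConfiguration field spelling failCgroupV1 (inferred from the warning text, not verified against this job):

	# Sketch: a KubeletConfiguration fragment that opts back into cgroup v1.
	# The failCgroupV1 field name is inferred from the kubeadm warning above;
	# verify it against the kubelet v1.35 configuration reference before use.
	cat <<'EOF' > kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Note that minikube's own suggestion in the log (--extra-config=kubelet.cgroup-driver=systemd) targets the cgroup driver rather than the v1/v2 check, so the configuration route above is the one the warning text actually describes.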
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 6 (322.433274ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:02:48.359086 2076279 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
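The status probe exits 6 because the stderr shows the "no-preload-255023" endpoint is missing from the kubeconfig, so the profile's kubectl context is stale. The warning text's own remedy is minikube update-context; a sketch of applying and verifying it with standard kubectl flags (illustrative, not part of the test flow):

	# The remedy named in the WARNING above, scoped to this profile:
	out/minikube-linux-arm64 update-context -p no-preload-255023

	# Confirm an endpoint was written back for the context:
	kubectl config view --minify --context no-preload-255023 \
	  -o jsonpath='{.clusters[0].cluster.server}'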
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-255023
helpers_test.go:244: (dbg) docker inspect no-preload-255023:

-- stdout --
	[
	    {
	        "Id": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	        "Created": "2025-12-16T03:54:15.810217174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2047579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:54:15.877443945Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e-json.log",
	        "Name": "/no-preload-255023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-255023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-255023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	                "LowerDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-255023",
	                "Source": "/var/lib/docker/volumes/no-preload-255023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-255023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-255023",
	                "name.minikube.sigs.k8s.io": "no-preload-255023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "256e4f9aa86f99f79faacaa868cdf31f4b2fc13a757dc64960cd771c6c4ff8b0",
	            "SandboxKey": "/var/run/docker/netns/256e4f9aa86f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34629"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34630"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34633"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34631"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34632"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-255023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:22:4d:72:1b:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba784dbb0bf675265a222a2ccbfc260249ee6464ab188d5ef5e9194204ab459f",
	                    "EndpointID": "bb7e6178d0c584a363e69f7c998efcccf04a6debdd8cca59ecd1f85a3daebffe",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-255023",
	                        "9e19dbb9154c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
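The inspect output confirms the container side is healthy: State.Status is "running" with RestartCount 0, and the apiserver's 8443/tcp is published at 127.0.0.1:34632, so the breakage sits inside the guest (the kubelet), not in Docker networking. For spot checks, the same fields can be pulled with docker inspect's Go templating instead of scanning the full JSON:

	# Just the container state:
	docker inspect -f '{{.State.Status}}' no-preload-255023

	# Host port backing the apiserver's 8443/tcp mapping:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-255023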
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 6 (349.78362ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:02:48.728654 2076355 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-255023 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-580645 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-092028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ stop    │ -p embed-certs-092028 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:58 UTC │
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:01:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:01:40.358627 2073073 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:01:40.358771 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.358784 2073073 out.go:374] Setting ErrFile to fd 2...
	I1216 04:01:40.358790 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.359119 2073073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:01:40.359639 2073073 out.go:368] Setting JSON to false
	I1216 04:01:40.360571 2073073 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35045,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:01:40.360643 2073073 start.go:143] virtualization:  
	I1216 04:01:40.364536 2073073 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:01:40.367700 2073073 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:01:40.367780 2073073 notify.go:221] Checking for updates...
	I1216 04:01:40.374179 2073073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:01:40.377177 2073073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:01:40.380122 2073073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:01:40.382984 2073073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:01:40.385825 2073073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:01:40.389346 2073073 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:40.389442 2073073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:01:40.423035 2073073 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:01:40.423253 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.478281 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.468443485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.478389 2073073 docker.go:319] overlay module found
	I1216 04:01:40.481589 2073073 out.go:179] * Using the docker driver based on user configuration
	I1216 04:01:40.484342 2073073 start.go:309] selected driver: docker
	I1216 04:01:40.484360 2073073 start.go:927] validating driver "docker" against <nil>
	I1216 04:01:40.484390 2073073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:01:40.485138 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.540618 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.531037075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.540793 2073073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 04:01:40.540832 2073073 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 04:01:40.541056 2073073 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:01:40.544109 2073073 out.go:179] * Using Docker driver with root privileges
	I1216 04:01:40.546924 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:40.547001 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:40.547019 2073073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:01:40.547159 2073073 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:40.552090 2073073 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:01:40.554928 2073073 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:01:40.557867 2073073 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:01:40.560695 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:40.560741 2073073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:01:40.560753 2073073 cache.go:65] Caching tarball of preloaded images
	I1216 04:01:40.560789 2073073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:01:40.560857 2073073 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:01:40.560868 2073073 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:01:40.560979 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:40.560997 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json: {Name:mkec760556e6c51ee205092e94b87aaba5f75b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:40.580559 2073073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:01:40.580583 2073073 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:01:40.580603 2073073 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:01:40.580637 2073073 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:01:40.580748 2073073 start.go:364] duration metric: took 89.631µs to acquireMachinesLock for "newest-cni-450938"
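
The acquireMachinesLock lines above show minikube serializing machine creation behind a named lock with a 500ms retry delay and a 10-minute timeout. As a rough illustration only (minikube's real lock is not a plain lockfile, and the path and helper names below are made up), a retry-until-deadline acquire in Go could look like this:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lockfile until the deadline passes,
    // mirroring the Delay:500ms / Timeout:10m0s values seen in the log.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to provision the machine")
    }
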
	I1216 04:01:40.580779 2073073 start.go:93] Provisioning new machine with config: &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:01:40.580854 2073073 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:01:40.584420 2073073 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 04:01:40.584656 2073073 start.go:159] libmachine.API.Create for "newest-cni-450938" (driver="docker")
	I1216 04:01:40.584695 2073073 client.go:173] LocalClient.Create starting
	I1216 04:01:40.584764 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 04:01:40.584813 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584835 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.584892 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 04:01:40.584915 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584931 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.585306 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:01:40.601358 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:01:40.601446 2073073 network_create.go:284] running [docker network inspect newest-cni-450938] to gather additional debugging logs...
	I1216 04:01:40.601465 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938
	W1216 04:01:40.616984 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 returned with exit code 1
	I1216 04:01:40.617014 2073073 network_create.go:287] error running [docker network inspect newest-cni-450938]: docker network inspect newest-cni-450938: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-450938 not found
	I1216 04:01:40.617029 2073073 network_create.go:289] output of [docker network inspect newest-cni-450938]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-450938 not found
	
	** /stderr **
	I1216 04:01:40.617127 2073073 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:40.633949 2073073 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 04:01:40.634331 2073073 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 04:01:40.634582 2073073 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 04:01:40.635035 2073073 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a049e0}
	I1216 04:01:40.635083 2073073 network_create.go:124] attempt to create docker network newest-cni-450938 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 04:01:40.635147 2073073 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-450938 newest-cni-450938
	I1216 04:01:40.694811 2073073 network_create.go:108] docker network newest-cni-450938 192.168.76.0/24 created
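
The three "skipping subnet ... that is taken" lines followed by "using free private subnet" show the scan that settled on 192.168.76.0/24: candidate /24 ranges are tried in order and the first one no existing bridge occupies wins. A minimal sketch of that first-fit scan (the candidate step and the taken set come from the log above; the function itself is illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "net"
    )

    // freeSubnet walks candidate private /24s and returns the first one
    // not already claimed by a docker bridge. The log shows candidates
    // stepping 192.168.49.0 -> .58 -> .67 -> .76, i.e. the third octet
    // advancing by 9 on each attempt.
    func freeSubnet(taken map[string]bool) (*net.IPNet, error) {
        for third := 49; third <= 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                continue // subnet is taken, try the next candidate
            }
            _, ipnet, err := net.ParseCIDR(cidr)
            if err != nil {
                return nil, err
            }
            return ipnet, nil
        }
        return nil, fmt.Errorf("no free private subnet found")
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // br-dec5f3d28f85 in the log
            "192.168.58.0/24": true, // br-9d705cdcdbc2
            "192.168.67.0/24": true, // br-9eafaf3b4a19
        }
        subnet, err := freeSubnet(taken)
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet:", subnet) // 192.168.76.0/24
    }
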
	I1216 04:01:40.694847 2073073 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-450938" container
	I1216 04:01:40.694937 2073073 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:01:40.711581 2073073 cli_runner.go:164] Run: docker volume create newest-cni-450938 --label name.minikube.sigs.k8s.io=newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:01:40.729628 2073073 oci.go:103] Successfully created a docker volume newest-cni-450938
	I1216 04:01:40.729716 2073073 cli_runner.go:164] Run: docker run --rm --name newest-cni-450938-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --entrypoint /usr/bin/test -v newest-cni-450938:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:01:41.281285 2073073 oci.go:107] Successfully prepared a docker volume newest-cni-450938
	I1216 04:01:41.281356 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:41.281367 2073073 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:01:41.281445 2073073 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:01:45.222137 2073073 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.940644385s)
	I1216 04:01:45.222178 2073073 kic.go:203] duration metric: took 3.94080544s to extract preloaded images to volume ...
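
The two lines above cover the ~3.9s preload extraction: the lz4 tarball of cached images is bind-mounted read-only into a short-lived kicbase container whose entrypoint is tar, and unpacked into the cluster's named volume. A stripped-down sketch of that invocation via os/exec (error handling minimal; the argument values in main are abbreviated from the log and would need real paths to run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload mirrors the `docker run --rm --entrypoint /usr/bin/tar ...`
    // call in the log: mount the tarball read-only at /preloaded.tar, mount
    // the named volume at /extractDir, and untar with lz4 decompression.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(
            "preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
            "newest-cni-450938",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117",
        )
        fmt.Println("extract err:", err)
    }
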
	W1216 04:01:45.222367 2073073 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:01:45.222487 2073073 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:01:45.304396 2073073 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-450938 --name newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-450938 --network newest-cni-450938 --ip 192.168.76.2 --volume newest-cni-450938:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:01:45.622538 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Running}}
	I1216 04:01:45.645583 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:45.670160 2073073 cli_runner.go:164] Run: docker exec newest-cni-450938 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:01:45.723975 2073073 oci.go:144] the created container "newest-cni-450938" has a running status.
	I1216 04:01:45.724003 2073073 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa...
	I1216 04:01:46.267889 2073073 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:01:46.287596 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.304458 2073073 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:01:46.304481 2073073 kic_runner.go:114] Args: [docker exec --privileged newest-cni-450938 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:01:46.342352 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.358427 2073073 machine.go:94] provisionDockerMachine start ...
	I1216 04:01:46.358587 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:46.375890 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:46.376242 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:46.376257 2073073 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:01:46.376910 2073073 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:01:49.515224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.515347 2073073 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:01:49.515465 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.535848 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.536182 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.536201 2073073 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:01:49.686017 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.686121 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.708351 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.708676 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.708700 2073073 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:01:49.847224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:01:49.847257 2073073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:01:49.847275 2073073 ubuntu.go:190] setting up certificates
	I1216 04:01:49.847284 2073073 provision.go:84] configureAuth start
	I1216 04:01:49.847343 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:49.866142 2073073 provision.go:143] copyHostCerts
	I1216 04:01:49.866218 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:01:49.866228 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:01:49.866302 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:01:49.866395 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:01:49.866400 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:01:49.866426 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:01:49.866481 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:01:49.866486 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:01:49.866507 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:01:49.866552 2073073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
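
The "generating server cert ... san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]" line is the machine's TLS identity being minted with both IP and DNS subject alternative names. A compact sketch of building such a certificate with Go's crypto/x509, self-signed here for brevity (minikube instead signs with the ca.pem/ca-key.pem pair listed above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // SAN list from the log, split into IP and DNS fields as x509 requires.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-450938"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-450938"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("generated %d-byte DER server cert\n", len(der))
    }
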
	I1216 04:01:50.260935 2073073 provision.go:177] copyRemoteCerts
	I1216 04:01:50.261010 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:01:50.261061 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.278254 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.374622 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:01:50.392129 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:01:50.409364 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:01:50.427428 2073073 provision.go:87] duration metric: took 580.130211ms to configureAuth
	I1216 04:01:50.427478 2073073 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:01:50.427668 2073073 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:50.427681 2073073 machine.go:97] duration metric: took 4.069230888s to provisionDockerMachine
	I1216 04:01:50.427689 2073073 client.go:176] duration metric: took 9.842984311s to LocalClient.Create
	I1216 04:01:50.427703 2073073 start.go:167] duration metric: took 9.843048588s to libmachine.API.Create "newest-cni-450938"
	I1216 04:01:50.427714 2073073 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:01:50.427724 2073073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:01:50.427814 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:01:50.427858 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.444571 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.543256 2073073 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:01:50.546463 2073073 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:01:50.546490 2073073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:01:50.546502 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:01:50.546555 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:01:50.546641 2073073 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:01:50.546744 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:01:50.554130 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:50.571399 2073073 start.go:296] duration metric: took 143.669232ms for postStartSetup
	I1216 04:01:50.571809 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.589075 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:50.589367 2073073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:01:50.589424 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.606538 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.701772 2073073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:01:50.711675 2073073 start.go:128] duration metric: took 10.130806483s to createHost
	I1216 04:01:50.711705 2073073 start.go:83] releasing machines lock for "newest-cni-450938", held for 10.130943333s
	I1216 04:01:50.711776 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.732853 2073073 ssh_runner.go:195] Run: cat /version.json
	I1216 04:01:50.732921 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.733181 2073073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:01:50.733238 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.768130 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.773572 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.871263 2073073 ssh_runner.go:195] Run: systemctl --version
	I1216 04:01:50.965373 2073073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:01:50.969981 2073073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:01:50.970086 2073073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:01:51.000188 2073073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:01:51.000222 2073073 start.go:496] detecting cgroup driver to use...
	I1216 04:01:51.000256 2073073 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:01:51.000314 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:01:51.019286 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:01:51.033299 2073073 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:01:51.033403 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:01:51.051418 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:01:51.070273 2073073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:01:51.194121 2073073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:01:51.311613 2073073 docker.go:234] disabling docker service ...
	I1216 04:01:51.311729 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:01:51.333815 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:01:51.346480 2073073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:01:51.470333 2073073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:01:51.603299 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:01:51.616625 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:01:51.630599 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:01:51.640005 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:01:51.649178 2073073 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:01:51.649257 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:01:51.658373 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.667673 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:01:51.676660 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.685480 2073073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:01:51.694285 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:01:51.703488 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:01:51.712372 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:01:51.721367 2073073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:01:51.729097 2073073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:01:51.736893 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:51.844375 2073073 ssh_runner.go:195] Run: sudo systemctl restart containerd
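
The run of sed one-liners above rewrites /etc/containerd/config.toml in place before the restart, most importantly forcing SystemdCgroup = false so containerd matches the "cgroupfs" driver detected on the host. The same single substitution expressed in Go, as a hypothetical in-process equivalent of that sed:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs rewrites any `SystemdCgroup = ...` line to false while
    // preserving its indentation, like the sed command in the log.
    func setCgroupfs(configToml string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configToml, "${1}SystemdCgroup = false")
    }

    func main() {
        in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        fmt.Print(setCgroupfs(in))
    }
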
	I1216 04:01:51.993981 2073073 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:01:51.994107 2073073 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:01:51.998351 2073073 start.go:564] Will wait 60s for crictl version
	I1216 04:01:51.998465 2073073 ssh_runner.go:195] Run: which crictl
	I1216 04:01:52.005463 2073073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:01:52.032896 2073073 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:01:52.032981 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.059717 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.085644 2073073 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:01:52.088617 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:52.106258 2073073 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:01:52.110161 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
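
The grep/echo pipeline above is an idempotent /etc/hosts update: any stale line ending in a tab plus host.minikube.internal is filtered out, then the current gateway mapping is appended and the file is copied back into place. A small sketch of the same filter-and-append step (pure string manipulation; writing the result back with sudo, as the log does, is left out):

    package main

    import (
        "fmt"
        "strings"
    )

    // pinHost drops any existing line for name (matched by a trailing
    // "\t"+name, like the grep -v in the log) and appends ip<TAB>name.
    func pinHost(hosts, ip, name string) string {
        var b strings.Builder
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                b.WriteString(line + "\n")
            }
        }
        b.WriteString(ip + "\t" + name + "\n")
        return b.String()
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
        fmt.Print(pinHost(hosts, "192.168.76.1", "host.minikube.internal"))
    }
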
	I1216 04:01:52.122893 2073073 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:01:52.125844 2073073 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:01:52.126001 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:52.126091 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.152470 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.152498 2073073 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:01:52.152563 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.176896 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.176919 2073073 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:01:52.176928 2073073 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:01:52.177016 2073073 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:01:52.177086 2073073 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:01:52.218042 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:52.218071 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:52.218119 2073073 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:01:52.218150 2073073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:01:52.218321 2073073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
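	The generated config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to the node a few lines below. If a config like this were suspected to be malformed, kubeadm ships an offline validator that can be run against it; a sketch, assuming the file has already landed on the node:

	minikube ssh -p newest-cni-450938 -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new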
	
	I1216 04:01:52.218398 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:01:52.230127 2073073 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:01:52.230208 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:01:52.239812 2073073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:01:52.255679 2073073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:01:52.270419 2073073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1216 04:01:52.284034 2073073 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:01:52.287803 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.297256 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:52.412361 2073073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:01:52.428888 2073073 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:01:52.428959 2073073 certs.go:195] generating shared ca certs ...
	I1216 04:01:52.428991 2073073 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.429192 2073073 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:01:52.429285 2073073 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:01:52.429319 2073073 certs.go:257] generating profile certs ...
	I1216 04:01:52.429409 2073073 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:01:52.429451 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt with IP's: []
	I1216 04:01:52.591834 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt ...
	I1216 04:01:52.591928 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt: {Name:mk7778fd64a4e46926332e38f467016f166dd4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592375 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key ...
	I1216 04:01:52.592423 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key: {Name:mk64ab6c72a270d4e474bc857c4508cc11c704c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592850 2073073 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:01:52.592903 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 04:01:52.672242 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c ...
	I1216 04:01:52.672287 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c: {Name:mk3c094233344d156b233623b9dbfae4496ab12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672537 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c ...
	I1216 04:01:52.672554 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c: {Name:mke958b63de0c9e687b9653a66eec1e3497a17af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672658 2073073 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt
	I1216 04:01:52.672758 2073073 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key
	I1216 04:01:52.672837 2073073 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:01:52.672864 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt with IP's: []
	I1216 04:01:53.025120 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt ...
	I1216 04:01:53.025154 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt: {Name:mkca565dc28355ccf88123a839d9cc0986e3f757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025346 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key ...
	I1216 04:01:53.025361 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key: {Name:mkb5acdd577d99db642b84842da90293bb2494a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025563 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:01:53.025610 2073073 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:01:53.025625 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:01:53.025652 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:01:53.025681 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:01:53.025711 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:01:53.025764 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:53.026345 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:01:53.047604 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:01:53.067106 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:01:53.086400 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:01:53.106958 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:01:53.125852 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:01:53.144046 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:01:53.162443 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:01:53.180617 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:01:53.202914 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:01:53.228308 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:01:53.254030 2073073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
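	The apiserver cert minted above is signed for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; once copied, the SANs can be double-checked on the node with openssl. A sketch, using the destination path from the scp above:

	minikube ssh -p newest-cni-450938 -- "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'"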
	I1216 04:01:53.268427 2073073 ssh_runner.go:195] Run: openssl version
	I1216 04:01:53.275148 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.283060 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:01:53.291347 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295430 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295543 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.338110 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:01:53.345692 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
	I1216 04:01:53.354101 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.361981 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:01:53.369807 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.373913 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.374034 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.415192 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.422756 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.430342 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.438151 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:01:53.446120 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450114 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450180 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.491333 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:01:53.498914 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
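	The ln -fs pairs above implement OpenSSL's c_rehash convention: every CA under /etc/ssl/certs must also be reachable through a <subject-hash>.0 symlink, and the hash printed by each openssl x509 -hash run is exactly that link name. A sketch verifying the pairing for minikubeCA as set up by this log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem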
	I1216 04:01:53.506771 2073073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:01:53.510538 2073073 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:01:53.510593 2073073 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:53.510681 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:01:53.510746 2073073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:01:53.537047 2073073 cri.go:89] found id: ""
	I1216 04:01:53.537176 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:01:53.545264 2073073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:01:53.553401 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:01:53.553502 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:01:53.561504 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:01:53.561527 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:01:53.561581 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:01:53.569732 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:01:53.569844 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:01:53.577622 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:01:53.585671 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:01:53.585743 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:01:53.593272 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.601710 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:01:53.601791 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.609698 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:01:53.617871 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:01:53.617953 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:01:53.625500 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:01:53.665591 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:01:53.665653 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:01:53.769108 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:01:53.769186 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:01:53.769232 2073073 kubeadm.go:319] OS: Linux
	I1216 04:01:53.769281 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:01:53.769333 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:01:53.769384 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:01:53.769436 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:01:53.769489 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:01:53.769544 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:01:53.769592 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:01:53.769644 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:01:53.769694 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:01:53.843812 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:01:53.843931 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:01:53.844032 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:01:53.849932 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:01:53.856727 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:01:53.856901 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:01:53.857012 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:01:54.280084 2073073 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:01:54.512481 2073073 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:01:55.160883 2073073 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:01:55.382188 2073073 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:01:55.675582 2073073 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:01:55.675752 2073073 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:55.934138 2073073 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:01:55.934424 2073073 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:56.047522 2073073 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:01:56.247778 2073073 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:01:56.462583 2073073 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:01:56.462916 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:01:56.695545 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:01:56.807074 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:01:56.888027 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:01:57.401338 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:01:57.476073 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:01:57.476371 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:01:57.480701 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:01:57.484671 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:01:57.484788 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:01:57.484862 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:01:57.485264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:01:57.510728 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:01:57.510840 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:01:57.520316 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:01:57.521983 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:01:57.522283 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:01:57.655090 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:01:57.655217 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:02:44.807737 2047247 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00116118s
	I1216 04:02:44.807769 2047247 kubeadm.go:319] 
	I1216 04:02:44.807828 2047247 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:02:44.807861 2047247 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:02:44.808332 2047247 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:02:44.808350 2047247 kubeadm.go:319] 
	I1216 04:02:44.808601 2047247 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:02:44.808660 2047247 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:02:44.809013 2047247 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:02:44.809031 2047247 kubeadm.go:319] 
	I1216 04:02:44.815240 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:02:44.815746 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:02:44.815895 2047247 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:02:44.816168 2047247 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:02:44.816181 2047247 kubeadm.go:319] 
	I1216 04:02:44.816298 2047247 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
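	Of the preflight warnings above, the cgroups v1 one is the actionable hint on this kernel (the CGROUPS_* listing shows v1 controllers): kubelet v1.35 refuses to run on cgroup v1 unless explicitly opted back in. Per the warning's own instructions, the opt-in is a KubeletConfiguration field, and the SystemVerification check must also be skipped (this run already skips it via --ignore-preflight-errors). A hypothetical fragment of what that opt-in would look like; this run did not set it:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# explicitly re-enable cgroup v1 support for kubelet v1.35+ (see the KEP-5573 link in the warning)
	failCgroupV1: false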
	I1216 04:02:44.816332 2047247 kubeadm.go:403] duration metric: took 8m6.942382888s to StartCluster
	I1216 04:02:44.816370 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:02:44.816433 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:02:44.842115 2047247 cri.go:89] found id: ""
	I1216 04:02:44.842201 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.842224 2047247 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:02:44.842244 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:02:44.842323 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:02:44.871535 2047247 cri.go:89] found id: ""
	I1216 04:02:44.871561 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.871575 2047247 logs.go:284] No container was found matching "etcd"
	I1216 04:02:44.871582 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:02:44.871639 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:02:44.895425 2047247 cri.go:89] found id: ""
	I1216 04:02:44.895448 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.895456 2047247 logs.go:284] No container was found matching "coredns"
	I1216 04:02:44.895462 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:02:44.895526 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:02:44.919895 2047247 cri.go:89] found id: ""
	I1216 04:02:44.919921 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.919930 2047247 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:02:44.919937 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:02:44.920004 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:02:44.950798 2047247 cri.go:89] found id: ""
	I1216 04:02:44.950826 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.950835 2047247 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:02:44.950841 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:02:44.950901 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:02:44.984134 2047247 cri.go:89] found id: ""
	I1216 04:02:44.984161 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.984170 2047247 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:02:44.984177 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:02:44.984238 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:02:45.019779 2047247 cri.go:89] found id: ""
	I1216 04:02:45.019872 2047247 logs.go:282] 0 containers: []
	W1216 04:02:45.019899 2047247 logs.go:284] No container was found matching "kindnet"
	I1216 04:02:45.019923 2047247 logs.go:123] Gathering logs for container status ...
	I1216 04:02:45.019972 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:02:45.069300 2047247 logs.go:123] Gathering logs for kubelet ...
	I1216 04:02:45.069346 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:02:45.143191 2047247 logs.go:123] Gathering logs for dmesg ...
	I1216 04:02:45.143236 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:02:45.166359 2047247 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:02:45.166399 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:02:45.288271 2047247 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:02:45.288296 2047247 logs.go:123] Gathering logs for containerd ...
	I1216 04:02:45.288311 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1216 04:02:45.336518 2047247 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:02:45.336599 2047247 out.go:285] * 
	W1216 04:02:45.336658 2047247 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.336677 2047247 out.go:285] * 
	W1216 04:02:45.341134 2047247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:02:45.347800 2047247 out.go:203] 
	W1216 04:02:45.350811 2047247 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.351828 2047247 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:02:45.351862 2047247 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:02:45.356103 2047247 out.go:203] 
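	Acting on the suggestion printed above would mean restarting the profile with the systemd cgroup driver; a sketch, reusing the driver, runtime, and version recorded in StartCluster earlier in this log (the profile name is a placeholder, since this stretch of the log interleaves more than one profile):

	out/minikube-linux-arm64 start -p <profile> --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd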
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 03:54:26 no-preload-255023 containerd[758]: time="2025-12-16T03:54:26.067986383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.567349921Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.569948194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.588851575Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.596155920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.015464986Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.018332545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.026775300Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.027787428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.675775645Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.676978497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686002604Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686924289Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.808092869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.810368902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.821471844Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.822069729Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.124780620Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.126168452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.131232825Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.133433035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.474548837Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.476933667Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486012944Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486755540Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:49.365904    5835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:49.366313    5835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:49.367957    5835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:49.368453    5835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:49.370065    5835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:02:49 up  9:45,  0 user,  load average: 1.12, 1.84, 2.03
	Linux no-preload-255023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:46 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:47 no-preload-255023 kubelet[5595]: E1216 04:02:47.008736    5595 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 16 04:02:47 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:47 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:47 no-preload-255023 kubelet[5650]: E1216 04:02:47.748655    5650 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:47 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:48 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 16 04:02:48 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:48 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:48 no-preload-255023 kubelet[5728]: E1216 04:02:48.497575    5728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:48 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:48 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:02:49 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 16 04:02:49 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:49 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:02:49 no-preload-255023 kubelet[5803]: E1216 04:02:49.255871    5803 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:02:49 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:02:49 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
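Root cause note: the kubelet loop above is what takes this whole group down. kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host, systemd restarts it (counter 322 through 325), the apiserver never binds 8443, and every kubectl call is refused. A minimal check of which cgroup hierarchy the node sees, assuming coreutils stat is available inside the kic container:

	# "cgroup2fs" = cgroup v2 (unified hierarchy); "tmpfs" = legacy cgroup v1, which kubelet now rejects
	docker exec no-preload-255023 stat -fc %T /sys/fs/cgroup

On this Ubuntu 20.04 / 5.15.0-1084-aws agent the mount is evidently still cgroup v1, matching the validation error above.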
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 6 (330.201898ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:02:49.824697 2076592 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.97s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (104.72s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1216 04:03:27.929739 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:03:44.847078 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:03:51.134874 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.199472485s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
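All four kubectl apply failures above share one symptom: nothing is listening on the apiserver port, so validation cannot even download the OpenAPI schema, and --validate=false would not rescue the apply since the server itself is down. A quick probe from the host, assuming curl and using the 127.0.0.1:34632 mapping for 8443/tcp shown in the docker inspect output below (a healthy apiserver answers /healthz with "ok"; here the TCP connection is refused outright):

	curl -sk https://127.0.0.1:34632/healthz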
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-255023 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-255023 describe deploy/metrics-server -n kube-system: exit status 1 (59.952789ms)

** stderr ** 
	error: context "no-preload-255023" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-255023 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
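The "context does not exist" error and the earlier "does not appear in .../kubeconfig" status error are consistent: start never completed, so the profile was never written to the kubeconfig. The fix the status warning itself suggests, once the apiserver is actually reachable, would be:

	out/minikube-linux-arm64 update-context -p no-preload-255023

Until the cluster registers an endpoint there is nothing for update-context to repair, so the post-mortem trail below is the more useful lead.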
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-255023
helpers_test.go:244: (dbg) docker inspect no-preload-255023:

-- stdout --
	[
	    {
	        "Id": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	        "Created": "2025-12-16T03:54:15.810217174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2047579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T03:54:15.877443945Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e-json.log",
	        "Name": "/no-preload-255023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-255023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-255023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	                "LowerDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-255023",
	                "Source": "/var/lib/docker/volumes/no-preload-255023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-255023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-255023",
	                "name.minikube.sigs.k8s.io": "no-preload-255023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "256e4f9aa86f99f79faacaa868cdf31f4b2fc13a757dc64960cd771c6c4ff8b0",
	            "SandboxKey": "/var/run/docker/netns/256e4f9aa86f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34629"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34630"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34633"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34631"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34632"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-255023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:22:4d:72:1b:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba784dbb0bf675265a222a2ccbfc260249ee6464ab188d5ef5e9194204ab459f",
	                    "EndpointID": "bb7e6178d0c584a363e69f7c998efcccf04a6debdd8cca59ecd1f85a3daebffe",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-255023",
	                        "9e19dbb9154c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
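Aside: when only a field or two of the inspect dump matters, a Go template narrows it; a sketch extracting the container state and the profile network's IP seen above (expected output: running 192.168.85.2):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "no-preload-255023").IPAddress}}' no-preload-255023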
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 6 (335.096752ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:04:33.438975 2078368 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-255023 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ delete  │ -p old-k8s-version-580645                                                                                                                                                                                                                                  │ old-k8s-version-580645       │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:56 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:56 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-092028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ stop    │ -p embed-certs-092028 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:58 UTC │
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:01:40
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:01:40.358627 2073073 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:01:40.358771 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.358784 2073073 out.go:374] Setting ErrFile to fd 2...
	I1216 04:01:40.358790 2073073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:01:40.359119 2073073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:01:40.359639 2073073 out.go:368] Setting JSON to false
	I1216 04:01:40.360571 2073073 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35045,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:01:40.360643 2073073 start.go:143] virtualization:  
	I1216 04:01:40.364536 2073073 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:01:40.367700 2073073 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:01:40.367780 2073073 notify.go:221] Checking for updates...
	I1216 04:01:40.374179 2073073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:01:40.377177 2073073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:01:40.380122 2073073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:01:40.382984 2073073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:01:40.385825 2073073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:01:40.389346 2073073 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:40.389442 2073073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:01:40.423035 2073073 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:01:40.423253 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.478281 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.468443485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.478389 2073073 docker.go:319] overlay module found
	I1216 04:01:40.481589 2073073 out.go:179] * Using the docker driver based on user configuration
	I1216 04:01:40.484342 2073073 start.go:309] selected driver: docker
	I1216 04:01:40.484360 2073073 start.go:927] validating driver "docker" against <nil>
	I1216 04:01:40.484390 2073073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:01:40.485138 2073073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:01:40.540618 2073073 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:01:40.531037075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:01:40.540793 2073073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 04:01:40.540832 2073073 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 04:01:40.541056 2073073 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:01:40.544109 2073073 out.go:179] * Using Docker driver with root privileges
	I1216 04:01:40.546924 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:40.547001 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:40.547019 2073073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:01:40.547159 2073073 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:40.552090 2073073 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:01:40.554928 2073073 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:01:40.557867 2073073 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:01:40.560695 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:40.560741 2073073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:01:40.560753 2073073 cache.go:65] Caching tarball of preloaded images
	I1216 04:01:40.560789 2073073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:01:40.560857 2073073 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:01:40.560868 2073073 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:01:40.560979 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:40.560997 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json: {Name:mkec760556e6c51ee205092e94b87aaba5f75b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:40.580559 2073073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:01:40.580583 2073073 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:01:40.580603 2073073 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:01:40.580637 2073073 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:01:40.580748 2073073 start.go:364] duration metric: took 89.631µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:01:40.580779 2073073 start.go:93] Provisioning new machine with config: &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:01:40.580854 2073073 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:01:40.584420 2073073 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 04:01:40.584656 2073073 start.go:159] libmachine.API.Create for "newest-cni-450938" (driver="docker")
	I1216 04:01:40.584695 2073073 client.go:173] LocalClient.Create starting
	I1216 04:01:40.584764 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 04:01:40.584813 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584835 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.584892 2073073 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 04:01:40.584915 2073073 main.go:143] libmachine: Decoding PEM data...
	I1216 04:01:40.584931 2073073 main.go:143] libmachine: Parsing certificate...
	I1216 04:01:40.585306 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:01:40.601358 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:01:40.601446 2073073 network_create.go:284] running [docker network inspect newest-cni-450938] to gather additional debugging logs...
	I1216 04:01:40.601465 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938
	W1216 04:01:40.616984 2073073 cli_runner.go:211] docker network inspect newest-cni-450938 returned with exit code 1
	I1216 04:01:40.617014 2073073 network_create.go:287] error running [docker network inspect newest-cni-450938]: docker network inspect newest-cni-450938: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-450938 not found
	I1216 04:01:40.617029 2073073 network_create.go:289] output of [docker network inspect newest-cni-450938]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-450938 not found
	
	** /stderr **
	I1216 04:01:40.617127 2073073 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:40.633949 2073073 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 04:01:40.634331 2073073 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 04:01:40.634582 2073073 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 04:01:40.635035 2073073 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a049e0}
	I1216 04:01:40.635083 2073073 network_create.go:124] attempt to create docker network newest-cni-450938 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 04:01:40.635147 2073073 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-450938 newest-cni-450938
	I1216 04:01:40.694811 2073073 network_create.go:108] docker network newest-cni-450938 192.168.76.0/24 created
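Editor's note: the "skipping subnet ... that is taken" probes above show minikube stepping the third octet by 9 (49 -> 58 -> 67 -> 76) until it finds a free /24 for the new docker network. A minimal Go sketch of that scan, assuming a hypothetical isSubnetTaken helper (the real check in minikube's network package inspects host interfaces and docker bridges):

    package main

    import "fmt"

    // isSubnetTaken is a stand-in for the real check, which looks at the
    // host's interfaces and existing docker bridges (network.go:211 above).
    func isSubnetTaken(cidr string) bool {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-dec5f3d28f85
    		"192.168.58.0/24": true, // br-9d705cdcdbc2
    		"192.168.67.0/24": true, // br-9eafaf3b4a19
    	}
    	return taken[cidr]
    }

    func main() {
    	// Candidates step the third octet by 9, matching the 49 -> 58 -> 67 -> 76
    	// progression in the log above.
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if isSubnetTaken(cidr) {
    			fmt.Println("skipping subnet", cidr, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", cidr)
    		break
    	}
    }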
	I1216 04:01:40.694847 2073073 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-450938" container
	I1216 04:01:40.694937 2073073 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:01:40.711581 2073073 cli_runner.go:164] Run: docker volume create newest-cni-450938 --label name.minikube.sigs.k8s.io=newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:01:40.729628 2073073 oci.go:103] Successfully created a docker volume newest-cni-450938
	I1216 04:01:40.729716 2073073 cli_runner.go:164] Run: docker run --rm --name newest-cni-450938-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --entrypoint /usr/bin/test -v newest-cni-450938:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:01:41.281285 2073073 oci.go:107] Successfully prepared a docker volume newest-cni-450938
	I1216 04:01:41.281356 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:41.281367 2073073 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:01:41.281445 2073073 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:01:45.222137 2073073 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-450938:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.940644385s)
	I1216 04:01:45.222178 2073073 kic.go:203] duration metric: took 3.94080544s to extract preloaded images to volume ...
	W1216 04:01:45.222367 2073073 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:01:45.222487 2073073 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:01:45.304396 2073073 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-450938 --name newest-cni-450938 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-450938 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-450938 --network newest-cni-450938 --ip 192.168.76.2 --volume newest-cni-450938:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:01:45.622538 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Running}}
	I1216 04:01:45.645583 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:45.670160 2073073 cli_runner.go:164] Run: docker exec newest-cni-450938 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:01:45.723975 2073073 oci.go:144] the created container "newest-cni-450938" has a running status.
	I1216 04:01:45.724003 2073073 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa...
	I1216 04:01:46.267889 2073073 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:01:46.287596 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.304458 2073073 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:01:46.304481 2073073 kic_runner.go:114] Args: [docker exec --privileged newest-cni-450938 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:01:46.342352 2073073 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:01:46.358427 2073073 machine.go:94] provisionDockerMachine start ...
	I1216 04:01:46.358587 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:46.375890 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:46.376242 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:46.376257 2073073 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:01:46.376910 2073073 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:01:49.515224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.515347 2073073 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:01:49.515465 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.535848 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.536182 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.536201 2073073 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:01:49.686017 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:01:49.686121 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:49.708351 2073073 main.go:143] libmachine: Using SSH client type: native
	I1216 04:01:49.708676 2073073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34659 <nil> <nil>}
	I1216 04:01:49.708700 2073073 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:01:49.847224 2073073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
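Editor's note: the first SSH dial at 04:01:46 fails with "handshake failed: EOF" because sshd inside the freshly started container is not yet accepting connections; the same command succeeds ~3s later. A minimal sketch of that connect-with-retry pattern using golang.org/x/crypto/ssh; the port (34659) and key path come from the log, while the 10-attempt/1s policy is an assumption, not minikube's actual backoff:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-450938/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
    		Timeout:         5 * time.Second,
    	}
    	// The first dial often fails with "handshake failed: EOF" while sshd
    	// in the fresh container is still starting, so retry briefly.
    	var client *ssh.Client
    	for i := 0; i < 10; i++ {
    		client, err = ssh.Dial("tcp", "127.0.0.1:34659", cfg)
    		if err == nil {
    			break
    		}
    		time.Sleep(time.Second)
    	}
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out) // newest-cni-450938, as in the log
    }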
	I1216 04:01:49.847257 2073073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:01:49.847275 2073073 ubuntu.go:190] setting up certificates
	I1216 04:01:49.847284 2073073 provision.go:84] configureAuth start
	I1216 04:01:49.847343 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:49.866142 2073073 provision.go:143] copyHostCerts
	I1216 04:01:49.866218 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:01:49.866228 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:01:49.866302 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:01:49.866395 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:01:49.866400 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:01:49.866426 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:01:49.866481 2073073 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:01:49.866486 2073073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:01:49.866507 2073073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:01:49.866552 2073073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
	I1216 04:01:50.260935 2073073 provision.go:177] copyRemoteCerts
	I1216 04:01:50.261010 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:01:50.261061 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.278254 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.374622 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:01:50.392129 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:01:50.409364 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:01:50.427428 2073073 provision.go:87] duration metric: took 580.130211ms to configureAuth
	I1216 04:01:50.427478 2073073 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:01:50.427668 2073073 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:01:50.427681 2073073 machine.go:97] duration metric: took 4.069230888s to provisionDockerMachine
	I1216 04:01:50.427689 2073073 client.go:176] duration metric: took 9.842984311s to LocalClient.Create
	I1216 04:01:50.427703 2073073 start.go:167] duration metric: took 9.843048588s to libmachine.API.Create "newest-cni-450938"
	I1216 04:01:50.427714 2073073 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:01:50.427724 2073073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:01:50.427814 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:01:50.427858 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.444571 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.543256 2073073 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:01:50.546463 2073073 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:01:50.546490 2073073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:01:50.546502 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:01:50.546555 2073073 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:01:50.546641 2073073 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:01:50.546744 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:01:50.554130 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:50.571399 2073073 start.go:296] duration metric: took 143.669232ms for postStartSetup
	I1216 04:01:50.571809 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.589075 2073073 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:01:50.589367 2073073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:01:50.589424 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.606538 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.701772 2073073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:01:50.711675 2073073 start.go:128] duration metric: took 10.130806483s to createHost
	I1216 04:01:50.711705 2073073 start.go:83] releasing machines lock for "newest-cni-450938", held for 10.130943333s
	I1216 04:01:50.711776 2073073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:01:50.732853 2073073 ssh_runner.go:195] Run: cat /version.json
	I1216 04:01:50.732921 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.733181 2073073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:01:50.733238 2073073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:01:50.768130 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.773572 2073073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:01:50.871263 2073073 ssh_runner.go:195] Run: systemctl --version
	I1216 04:01:50.965373 2073073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:01:50.969981 2073073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:01:50.970086 2073073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:01:51.000188 2073073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:01:51.000222 2073073 start.go:496] detecting cgroup driver to use...
	I1216 04:01:51.000256 2073073 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:01:51.000314 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:01:51.019286 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:01:51.033299 2073073 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:01:51.033403 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:01:51.051418 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:01:51.070273 2073073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:01:51.194121 2073073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:01:51.311613 2073073 docker.go:234] disabling docker service ...
	I1216 04:01:51.311729 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:01:51.333815 2073073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:01:51.346480 2073073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:01:51.470333 2073073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:01:51.603299 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:01:51.616625 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:01:51.630599 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:01:51.640005 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:01:51.649178 2073073 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:01:51.649257 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:01:51.658373 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.667673 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:01:51.676660 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:01:51.685480 2073073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:01:51.694285 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:01:51.703488 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:01:51.712372 2073073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:01:51.721367 2073073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:01:51.729097 2073073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:01:51.736893 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:51.844375 2073073 ssh_runner.go:195] Run: sudo systemctl restart containerd
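Editor's note: the run of sed commands above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runc shim version, CNI conf dir) before the daemon-reload and restart. A minimal Go equivalent of just the SystemdCgroup rewrite at 04:01:51.649257, shown as an illustration rather than minikube's actual code path:

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Force the runc shim onto the cgroupfs driver by rewriting any
    	// SystemdCgroup assignment, preserving its indentation.
    	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// A `systemctl restart containerd` is still required afterwards,
    	// as the log above shows.
    }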
	I1216 04:01:51.993981 2073073 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:01:51.994107 2073073 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:01:51.998351 2073073 start.go:564] Will wait 60s for crictl version
	I1216 04:01:51.998465 2073073 ssh_runner.go:195] Run: which crictl
	I1216 04:01:52.005463 2073073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:01:52.032896 2073073 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:01:52.032981 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.059717 2073073 ssh_runner.go:195] Run: containerd --version
	I1216 04:01:52.085644 2073073 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:01:52.088617 2073073 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:01:52.106258 2073073 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:01:52.110161 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.122893 2073073 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:01:52.125844 2073073 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:01:52.126001 2073073 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:01:52.126091 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.152470 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.152498 2073073 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:01:52.152563 2073073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:01:52.176896 2073073 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:01:52.176919 2073073 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:01:52.176928 2073073 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:01:52.177016 2073073 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:01:52.177086 2073073 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:01:52.218042 2073073 cni.go:84] Creating CNI manager for ""
	I1216 04:01:52.218071 2073073 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:01:52.218119 2073073 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:01:52.218150 2073073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:01:52.218321 2073073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:01:52.218398 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:01:52.230127 2073073 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:01:52.230208 2073073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:01:52.239812 2073073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:01:52.255679 2073073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:01:52.270419 2073073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
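Editor's note: the generated kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the options struct and written to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down, illustrative Go sketch of how such a document can be rendered with text/template, assuming a tiny stand-in template rather than minikube's real one, which lives in the kubeadm bootstrapper and carries many more fields:

    package main

    import (
    	"os"
    	"text/template"
    )

    // initCfg is a toy fragment of the InitConfiguration above; the field
    // values below are the ones visible in this run's log.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///run/containerd/containerd.sock
      name: "{{.NodeName}}"
    `

    func main() {
    	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
    	params := struct {
    		NodeIP        string
    		APIServerPort int
    		NodeName      string
    	}{"192.168.76.2", 8443, "newest-cni-450938"}
    	if err := tmpl.Execute(os.Stdout, params); err != nil {
    		panic(err)
    	}
    }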
	I1216 04:01:52.284034 2073073 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:01:52.287803 2073073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:01:52.297256 2073073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:01:52.412361 2073073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:01:52.428888 2073073 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:01:52.428959 2073073 certs.go:195] generating shared ca certs ...
	I1216 04:01:52.428991 2073073 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.429192 2073073 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:01:52.429285 2073073 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:01:52.429319 2073073 certs.go:257] generating profile certs ...
	I1216 04:01:52.429409 2073073 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:01:52.429451 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt with IP's: []
	I1216 04:01:52.591834 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt ...
	I1216 04:01:52.591928 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.crt: {Name:mk7778fd64a4e46926332e38f467016f166dd4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592375 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key ...
	I1216 04:01:52.592423 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key: {Name:mk64ab6c72a270d4e474bc857c4508cc11c704c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.592850 2073073 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:01:52.592903 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 04:01:52.672242 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c ...
	I1216 04:01:52.672287 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c: {Name:mk3c094233344d156b233623b9dbfae4496ab12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672537 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c ...
	I1216 04:01:52.672554 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c: {Name:mke958b63de0c9e687b9653a66eec1e3497a17af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:52.672658 2073073 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt
	I1216 04:01:52.672758 2073073 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key
	I1216 04:01:52.672837 2073073 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:01:52.672864 2073073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt with IP's: []
	I1216 04:01:53.025120 2073073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt ...
	I1216 04:01:53.025154 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt: {Name:mkca565dc28355ccf88123a839d9cc0986e3f757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025346 2073073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key ...
	I1216 04:01:53.025361 2073073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key: {Name:mkb5acdd577d99db642b84842da90293bb2494a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:01:53.025563 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:01:53.025610 2073073 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:01:53.025625 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:01:53.025652 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:01:53.025681 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:01:53.025711 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:01:53.025764 2073073 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:01:53.026345 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:01:53.047604 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:01:53.067106 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:01:53.086400 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:01:53.106958 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:01:53.125852 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:01:53.144046 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:01:53.162443 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:01:53.180617 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:01:53.202914 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:01:53.228308 2073073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:01:53.254030 2073073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:01:53.268427 2073073 ssh_runner.go:195] Run: openssl version
	I1216 04:01:53.275148 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.283060 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:01:53.291347 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295430 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.295543 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:01:53.338110 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:01:53.345692 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
	I1216 04:01:53.354101 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.361981 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:01:53.369807 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.373913 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.374034 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:01:53.415192 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.422756 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:01:53.430342 2073073 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.438151 2073073 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:01:53.446120 2073073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450114 2073073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.450180 2073073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:01:53.491333 2073073 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:01:53.498914 2073073 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
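Editor's note: the openssl/ln pairs above implement the standard OpenSSL trust-store convention: each PEM under /usr/share/ca-certificates is symlinked from /etc/ssl/certs/<subject-hash>.0 so that OpenSSL can look certificates up by hash (b5213941.0 for minikubeCA.pem in this run). A minimal Go sketch of that pair, shelling out to the same commands the log shows:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHashLink reproduces the openssl-hash + ln -fs pair from the log:
    // compute the subject hash of a PEM, then symlink /etc/ssl/certs/<hash>.0
    // to it so OpenSSL's hashed-directory lookup finds the certificate.
    func subjectHashLink(pem string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	return link, exec.Command("sudo", "ln", "-fs", pem, link).Run()
    }

    func main() {
    	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link) // b5213941.0 in the run above
    }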
	I1216 04:01:53.506771 2073073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:01:53.510538 2073073 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:01:53.510593 2073073 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:01:53.510681 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:01:53.510746 2073073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:01:53.537047 2073073 cri.go:89] found id: ""
	I1216 04:01:53.537176 2073073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:01:53.545264 2073073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:01:53.553401 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:01:53.553502 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:01:53.561504 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:01:53.561527 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:01:53.561581 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:01:53.569732 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:01:53.569844 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:01:53.577622 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:01:53.585671 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:01:53.585743 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:01:53.593272 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.601710 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:01:53.601791 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:01:53.609698 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:01:53.617871 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:01:53.617953 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:01:53.625500 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:01:53.665591 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:01:53.665653 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:01:53.769108 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:01:53.769186 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:01:53.769232 2073073 kubeadm.go:319] OS: Linux
	I1216 04:01:53.769281 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:01:53.769333 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:01:53.769384 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:01:53.769436 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:01:53.769489 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:01:53.769544 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:01:53.769592 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:01:53.769644 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:01:53.769694 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:01:53.843812 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:01:53.843931 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:01:53.844032 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:01:53.849932 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:01:53.856727 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:01:53.856901 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:01:53.857012 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:01:54.280084 2073073 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:01:54.512481 2073073 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:01:55.160883 2073073 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:01:55.382188 2073073 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:01:55.675582 2073073 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:01:55.675752 2073073 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:55.934138 2073073 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:01:55.934424 2073073 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:01:56.047522 2073073 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:01:56.247778 2073073 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:01:56.462583 2073073 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:01:56.462916 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:01:56.695545 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:01:56.807074 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:01:56.888027 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:01:57.401338 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:01:57.476073 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:01:57.476371 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:01:57.480701 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:01:57.484671 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:01:57.484788 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:01:57.484862 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:01:57.485264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:01:57.510728 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:01:57.510840 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:01:57.520316 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:01:57.521983 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:01:57.522283 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:01:57.655090 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:01:57.655217 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:02:44.807737 2047247 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00116118s
	I1216 04:02:44.807769 2047247 kubeadm.go:319] 
	I1216 04:02:44.807828 2047247 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:02:44.807861 2047247 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:02:44.808332 2047247 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:02:44.808350 2047247 kubeadm.go:319] 
	I1216 04:02:44.808601 2047247 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:02:44.808660 2047247 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:02:44.809013 2047247 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:02:44.809031 2047247 kubeadm.go:319] 
	I1216 04:02:44.815240 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:02:44.815746 2047247 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:02:44.815895 2047247 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:02:44.816168 2047247 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:02:44.816181 2047247 kubeadm.go:319] 
	I1216 04:02:44.816298 2047247 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:02:44.816332 2047247 kubeadm.go:403] duration metric: took 8m6.942382888s to StartCluster
	I1216 04:02:44.816370 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:02:44.816433 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:02:44.842115 2047247 cri.go:89] found id: ""
	I1216 04:02:44.842201 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.842224 2047247 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:02:44.842244 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:02:44.842323 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:02:44.871535 2047247 cri.go:89] found id: ""
	I1216 04:02:44.871561 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.871575 2047247 logs.go:284] No container was found matching "etcd"
	I1216 04:02:44.871582 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:02:44.871639 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:02:44.895425 2047247 cri.go:89] found id: ""
	I1216 04:02:44.895448 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.895456 2047247 logs.go:284] No container was found matching "coredns"
	I1216 04:02:44.895462 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:02:44.895526 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:02:44.919895 2047247 cri.go:89] found id: ""
	I1216 04:02:44.919921 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.919930 2047247 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:02:44.919937 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:02:44.920004 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:02:44.950798 2047247 cri.go:89] found id: ""
	I1216 04:02:44.950826 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.950835 2047247 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:02:44.950841 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:02:44.950901 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:02:44.984134 2047247 cri.go:89] found id: ""
	I1216 04:02:44.984161 2047247 logs.go:282] 0 containers: []
	W1216 04:02:44.984170 2047247 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:02:44.984177 2047247 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:02:44.984238 2047247 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:02:45.019779 2047247 cri.go:89] found id: ""
	I1216 04:02:45.019872 2047247 logs.go:282] 0 containers: []
	W1216 04:02:45.019899 2047247 logs.go:284] No container was found matching "kindnet"
	I1216 04:02:45.019923 2047247 logs.go:123] Gathering logs for container status ...
	I1216 04:02:45.019972 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:02:45.069300 2047247 logs.go:123] Gathering logs for kubelet ...
	I1216 04:02:45.069346 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:02:45.143191 2047247 logs.go:123] Gathering logs for dmesg ...
	I1216 04:02:45.143236 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:02:45.166359 2047247 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:02:45.166399 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:02:45.288271 2047247 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:02:45.277920    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.279623    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.281457    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.282172    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:02:45.283221    5455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:02:45.288296 2047247 logs.go:123] Gathering logs for containerd ...
	I1216 04:02:45.288311 2047247 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1216 04:02:45.336518 2047247 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:02:45.336599 2047247 out.go:285] * 
	W1216 04:02:45.336658 2047247 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.336677 2047247 out.go:285] * 
	W1216 04:02:45.341134 2047247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:02:45.347800 2047247 out.go:203] 
	W1216 04:02:45.350811 2047247 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00116118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:02:45.351828 2047247 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:02:45.351862 2047247 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:02:45.356103 2047247 out.go:203] 
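	The SystemVerification warning repeated throughout this run is the likely root cause: kubelet v1.35 or newer refuses to start on a cgroup v1 host unless the FailCgroupV1 option named in the warning is explicitly set to false. A minimal sketch for confirming which hierarchy the host mounts; the stat heuristic is a standard check, not taken from this log:

	    # "cgroup2fs" means cgroups v2; "tmpfs" means the legacy v1
	    # hierarchy that kubelet v1.35 rejects by default.
	    stat -fc %T /sys/fs/cgroup/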
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 03:54:26 no-preload-255023 containerd[758]: time="2025-12-16T03:54:26.067986383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.567349921Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.569948194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.588851575Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:27 no-preload-255023 containerd[758]: time="2025-12-16T03:54:27.596155920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.015464986Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.018332545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.026775300Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:29 no-preload-255023 containerd[758]: time="2025-12-16T03:54:29.027787428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.675775645Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.676978497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686002604Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:30 no-preload-255023 containerd[758]: time="2025-12-16T03:54:30.686924289Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.808092869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.810368902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.821471844Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:31 no-preload-255023 containerd[758]: time="2025-12-16T03:54:31.822069729Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.124780620Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.126168452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.131232825Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.133433035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.474548837Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.476933667Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486012944Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 16 03:54:33 no-preload-255023 containerd[758]: time="2025-12-16T03:54:33.486755540Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:04:34.064915    6851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:04:34.065711    6851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:04:34.067999    6851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:04:34.068669    6851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:04:34.070249    6851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
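	Every kubectl probe here is refused on localhost:8443, which matches the empty container list earlier: the kubelet never launched the static pods, so no apiserver is listening. A sketch of one way to confirm this from inside the node, mirroring the crictl invocation minikube itself runs above (profile name taken from this report):

	    minikube ssh -p no-preload-255023 -- sudo crictl ps -a --name kube-apiserver
	    # An empty table means the apiserver static pod was never created,
	    # so the refused connection is expected rather than a network fault.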
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:04:34 up  9:46,  0 user,  load average: 0.61, 1.47, 1.88
	Linux no-preload-255023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:04:30 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:04:31 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 16 04:04:31 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:31 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:31 no-preload-255023 kubelet[6729]: E1216 04:04:31.241133    6729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:04:31 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:04:31 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:04:31 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 16 04:04:31 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:31 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:31 no-preload-255023 kubelet[6734]: E1216 04:04:31.979510    6734 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:04:31 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:04:31 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:04:32 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 463.
	Dec 16 04:04:32 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:32 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:32 no-preload-255023 kubelet[6740]: E1216 04:04:32.739584    6740 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:04:32 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:04:32 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:04:33 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 16 04:04:33 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:33 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:04:33 no-preload-255023 kubelet[6764]: E1216 04:04:33.442966    6764 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:04:33 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:04:33 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
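The kubelet section above shows the service crash-looping (restart counter past 460) on the same configuration-validation error each time, which is why kubeadm's 4m0s health wait can never succeed. A sketch of the opt-in the warning text names, purely illustrative: minikube manages /var/lib/kubelet/config.yaml itself and would overwrite a manual edit on the next start, and 'failCgroupV1' is assumed here to be the lowerCamelCase form of the 'FailCgroupV1' option cited in the preflight warning:

    # Confirm the loop and the exact validation failure:
    sudo journalctl -u kubelet -n 50 --no-pager | grep 'command failed'
    # Illustrative only (assumption: appending the field satisfies the
    # validator; minikube will regenerate this file on restart):
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet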
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 6 (358.179847ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:04:34.548499 2078594 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (104.72s)
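The status probe also fails for an independent reason: the profile's endpoint is missing from the integration kubeconfig, which is what the "stale minikube-vm" warning above refers to. Outside CI, the report's own suggestion applies directly (profile name from this log):

    # Rewrite the kubeconfig entry for this profile, per the warning:
    minikube update-context -p no-preload-255023
    # Then verify kubectl points at the refreshed context:
    kubectl config current-context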

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (370.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1216 04:04:44.437371 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:05.534457 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:05.540887 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:05.552406 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:05.573813 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:05.615367 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:05.696937 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:05.858816 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:06.180491 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:06.822515 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:08.103911 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:10.665352 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:12.144581 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:15.787316 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:26.028929 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:46.510531 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:05:51.130777 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:06:27.473659 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:07:49.395295 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:08:44.846706 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:08:51.133991 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:08:54.215938 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:09:44.437695 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.358595403s)

                                                
                                                
-- stdout --
	* [no-preload-255023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-255023" primary control-plane node in "no-preload-255023" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:04:36.142328 2078887 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:04:36.142562 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142588 2078887 out.go:374] Setting ErrFile to fd 2...
	I1216 04:04:36.142607 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142894 2078887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:04:36.143393 2078887 out.go:368] Setting JSON to false
	I1216 04:04:36.144368 2078887 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35221,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:04:36.144465 2078887 start.go:143] virtualization:  
	I1216 04:04:36.150070 2078887 out.go:179] * [no-preload-255023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:04:36.153020 2078887 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:04:36.153105 2078887 notify.go:221] Checking for updates...
	I1216 04:04:36.158759 2078887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:04:36.161685 2078887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:36.164397 2078887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:04:36.167148 2078887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:04:36.169926 2078887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:04:36.173114 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:36.173672 2078887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:04:36.208296 2078887 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:04:36.208429 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.272451 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.263127415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.272558 2078887 docker.go:319] overlay module found
	I1216 04:04:36.275603 2078887 out.go:179] * Using the docker driver based on existing profile
	I1216 04:04:36.278393 2078887 start.go:309] selected driver: docker
	I1216 04:04:36.278413 2078887 start.go:927] validating driver "docker" against &{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:36.278512 2078887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:04:36.279246 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.337226 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.327670673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.337567 2078887 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:04:36.337598 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:36.337648 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:36.337694 2078887 start.go:353] cluster config:
	{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
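
The cni.go:143 line above ("docker" driver + "containerd" runtime found, recommending kindnet) reflects a simple selection rule: a non-docker runtime on the docker driver needs a real CNI for pod networking, and minikube reaches for kindnet unless one was requested explicitly. A minimal Go sketch of that rule as the log implies it; chooseCNI and its arguments are illustrative, not minikube's actual API:

package main

import "fmt"

// chooseCNI is a hypothetical reconstruction of the rule behind the
// cni.go:143 message: docker driver + containerd runtime => kindnet,
// unless the user explicitly selected a CNI.
func chooseCNI(driver, runtime, userChoice string) string {
	if userChoice != "" {
		return userChoice // an explicit --cni flag always wins
	}
	if driver == "docker" && runtime != "docker" {
		return "kindnet" // non-docker runtimes need a CNI for pod networking
	}
	return "" // otherwise leave the choice to later defaults
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd", "")) // prints "kindnet"
}
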
	I1216 04:04:36.342771 2078887 out.go:179] * Starting "no-preload-255023" primary control-plane node in "no-preload-255023" cluster
	I1216 04:04:36.345786 2078887 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:04:36.348879 2078887 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:04:36.351831 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:36.352008 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.352389 2078887 cache.go:107] acquiring lock: {Name:mk0450325aacc7460afde2487596c0895eb14316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352472 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 04:04:36.352485 2078887 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.485µs
	I1216 04:04:36.352508 2078887 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 04:04:36.352528 2078887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:04:36.352738 2078887 cache.go:107] acquiring lock: {Name:mkc870fc6c12b387ee25e1b9ca9a320632395941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352823 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1216 04:04:36.352838 2078887 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 104.014µs
	I1216 04:04:36.352845 2078887 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352864 2078887 cache.go:107] acquiring lock: {Name:mk6b703a23a3ab5a8bd9af36cf3a59f27d4e1f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352901 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1216 04:04:36.352910 2078887 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 48.311µs
	I1216 04:04:36.352917 2078887 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352934 2078887 cache.go:107] acquiring lock: {Name:mk60dd72305503c0ea2e16b1d16ccd8081a54f90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352967 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1216 04:04:36.352983 2078887 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 49.427µs
	I1216 04:04:36.352990 2078887 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353002 2078887 cache.go:107] acquiring lock: {Name:mk6fa36dfa510ec7b8233463c2d901c70484a816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353044 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1216 04:04:36.353053 2078887 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 53.111µs
	I1216 04:04:36.353060 2078887 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353096 2078887 cache.go:107] acquiring lock: {Name:mk65b0b8ff216fe2e0c76a8328b4837c4b65b152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353150 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1216 04:04:36.353161 2078887 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 80.704µs
	I1216 04:04:36.353167 2078887 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1216 04:04:36.353185 2078887 cache.go:107] acquiring lock: {Name:mk91af5531a8fba3ae1331bf11e776d4365c8b42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353224 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1216 04:04:36.353234 2078887 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 51.182µs
	I1216 04:04:36.353241 2078887 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1216 04:04:36.353254 2078887 cache.go:107] acquiring lock: {Name:mke4e5785550dce8ce0ae772cb7060b431e39dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353286 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1216 04:04:36.353295 2078887 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.321µs
	I1216 04:04:36.353301 2078887 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1216 04:04:36.353308 2078887 cache.go:87] Successfully saved all images to host disk.
	I1216 04:04:36.371527 2078887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:04:36.371552 2078887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:04:36.371574 2078887 cache.go:243] Successfully downloaded all kic artifacts
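
Each cache.go:107/115/96/80 quartet above is one iteration of the same check: take a per-image lock, stat the tarball under .minikube/cache/images/arm64, and record the save as already done, which is why the reported durations are all in microseconds (pure filesystem hits, no downloads). A minimal sketch of that pattern, assuming the path layout visible in the log; ensureCached and cachedImagePath are hypothetical helpers, not minikube's code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

var cacheMu sync.Mutex // stand-in for minikube's named file locks

// cachedImagePath maps "registry.k8s.io/pause:3.10.1" onto the tarball
// layout seen in the log: .../cache/images/arm64/registry.k8s.io/pause_3.10.1
func cachedImagePath(root, image string) string {
	return filepath.Join(root, "cache", "images", "arm64",
		strings.ReplaceAll(image, ":", "_"))
}

// ensureCached reports whether the image tarball already exists,
// mirroring the "exists ... succeeded" lines above.
func ensureCached(root, image string) (bool, error) {
	cacheMu.Lock()
	defer cacheMu.Unlock()
	if _, err := os.Stat(cachedImagePath(root, image)); err == nil {
		return true, nil // cache hit: skip the download entirely
	} else if !os.IsNotExist(err) {
		return false, err
	}
	return false, nil // caller would download and save the tarball here
}

func main() {
	hit, _ := ensureCached("/home/jenkins/.minikube", "registry.k8s.io/pause:3.10.1")
	fmt.Println("cache hit:", hit)
}
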
	I1216 04:04:36.371606 2078887 start.go:360] acquireMachinesLock for no-preload-255023: {Name:mkc3fbe159f35ba61346866b1384afc1dc23074c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.371679 2078887 start.go:364] duration metric: took 52.75µs to acquireMachinesLock for "no-preload-255023"
	I1216 04:04:36.371703 2078887 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:04:36.371713 2078887 fix.go:54] fixHost starting: 
	I1216 04:04:36.371983 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.389190 2078887 fix.go:112] recreateIfNeeded on no-preload-255023: state=Stopped err=<nil>
	W1216 04:04:36.389224 2078887 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:04:36.392475 2078887 out.go:252] * Restarting existing docker container for "no-preload-255023" ...
	I1216 04:04:36.392575 2078887 cli_runner.go:164] Run: docker start no-preload-255023
	I1216 04:04:36.686743 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.709621 2078887 kic.go:430] container "no-preload-255023" state is running.
	I1216 04:04:36.710033 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:36.742909 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.743215 2078887 machine.go:94] provisionDockerMachine start ...
	I1216 04:04:36.743307 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:36.771624 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:36.772082 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:36.772113 2078887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:04:36.772685 2078887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51402->127.0.0.1:34664: read: connection reset by peer
	I1216 04:04:39.911257 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 04:04:39.911284 2078887 ubuntu.go:182] provisioning hostname "no-preload-255023"
	I1216 04:04:39.911351 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:39.929644 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:39.929951 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:39.929968 2078887 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-255023 && echo "no-preload-255023" | sudo tee /etc/hostname
	I1216 04:04:40.091006 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 04:04:40.091150 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.111323 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:40.111660 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:40.111686 2078887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-255023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-255023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-255023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:04:40.255648 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
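
The handshake failure at 04:04:36 ("connection reset by peer") followed by a clean result at 04:04:39 shows the runner simply retrying the SSH dial until sshd inside the just-restarted container comes up. A sketch of that retry loop using golang.org/x/crypto/ssh; dialWithRetry, the one-second interval, and the deadline are illustrative choices, not minikube's exact values:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing until sshd answers or the deadline
// passes, matching the failure-then-success pattern in the log above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
	var lastErr error
	for start := time.Now(); time.Since(start) < deadline; time.Sleep(time.Second) {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "ssh: handshake failed: ... connection reset by peer"
	}
	return nil, fmt.Errorf("ssh not ready after %s: %w", deadline, lastErr)
}

func main() {
	// The real runner authenticates with the machine's id_rsa key
	// (see the sshutil.go:53 lines); Auth is omitted in this sketch.
	cfg := &ssh.ClientConfig{User: "docker", HostKeyCallback: ssh.InsecureIgnoreHostKey()}
	if _, err := dialWithRetry("127.0.0.1:34664", cfg, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
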
	I1216 04:04:40.255679 2078887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:04:40.255708 2078887 ubuntu.go:190] setting up certificates
	I1216 04:04:40.255719 2078887 provision.go:84] configureAuth start
	I1216 04:04:40.255800 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.274459 2078887 provision.go:143] copyHostCerts
	I1216 04:04:40.274544 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:04:40.274559 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:04:40.274643 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:04:40.274749 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:04:40.274761 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:04:40.274788 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:04:40.274850 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:04:40.274858 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:04:40.274882 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:04:40.274932 2078887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.no-preload-255023 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-255023]
	I1216 04:04:40.540362 2078887 provision.go:177] copyRemoteCerts
	I1216 04:04:40.540434 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:04:40.540481 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.560258 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.658891 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:04:40.677291 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:04:40.696276 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 04:04:40.714152 2078887 provision.go:87] duration metric: took 458.418313ms to configureAuth
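
provision.go:117 generates server.pem for the SAN set [127.0.0.1 192.168.85.2 localhost minikube no-preload-255023]. In Go's crypto/x509 that comes down to splitting the SANs between IPAddresses and DNSNames on the certificate template. A sketch under that assumption; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerTemplate routes each SAN into the right x509 field:
// parseable IPs into IPAddresses, everything else into DNSNames.
func newServerTemplate(org string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	return tmpl
}

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := newServerTemplate("jenkins.no-preload-255023",
		[]string{"127.0.0.1", "192.168.85.2", "localhost", "minikube", "no-preload-255023"})
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
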
	I1216 04:04:40.714179 2078887 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:04:40.714393 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:40.714406 2078887 machine.go:97] duration metric: took 3.971173434s to provisionDockerMachine
	I1216 04:04:40.714414 2078887 start.go:293] postStartSetup for "no-preload-255023" (driver="docker")
	I1216 04:04:40.714431 2078887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:04:40.714490 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:04:40.714532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.731640 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.827149 2078887 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:04:40.830526 2078887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:04:40.830554 2078887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:04:40.830567 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:04:40.830622 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:04:40.830706 2078887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:04:40.830809 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:04:40.838400 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:40.856071 2078887 start.go:296] duration metric: took 141.636209ms for postStartSetup
	I1216 04:04:40.856173 2078887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:04:40.856212 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.873995 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.968232 2078887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:04:40.973380 2078887 fix.go:56] duration metric: took 4.601659976s for fixHost
	I1216 04:04:40.973407 2078887 start.go:83] releasing machines lock for "no-preload-255023", held for 4.601715131s
	I1216 04:04:40.973483 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.991467 2078887 ssh_runner.go:195] Run: cat /version.json
	I1216 04:04:40.991532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.991607 2078887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:04:40.991672 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:41.016410 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.023238 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.219027 2078887 ssh_runner.go:195] Run: systemctl --version
	I1216 04:04:41.225735 2078887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:04:41.231530 2078887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:04:41.231614 2078887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:04:41.245369 2078887 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:04:41.245409 2078887 start.go:496] detecting cgroup driver to use...
	I1216 04:04:41.245441 2078887 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:04:41.245491 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:04:41.264763 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:04:41.278940 2078887 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:04:41.279078 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:04:41.295177 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:04:41.308854 2078887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:04:41.425808 2078887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:04:41.539126 2078887 docker.go:234] disabling docker service ...
	I1216 04:04:41.539232 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:04:41.555103 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:04:41.569579 2078887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:04:41.697114 2078887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:04:41.825875 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:04:41.840190 2078887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:04:41.856382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:04:41.866837 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:04:41.876037 2078887 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:04:41.876170 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:04:41.885348 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.894763 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:04:41.904382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.913120 2078887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:04:41.922033 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:04:41.931520 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:04:41.940760 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:04:41.953916 2078887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:04:41.967109 2078887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:04:41.975264 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.127685 2078887 ssh_runner.go:195] Run: sudo systemctl restart containerd
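
The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place, most importantly pinning SystemdCgroup = false to match the cgroupfs driver detected on the host, before the daemon-reload and restart make it take effect. An in-process Go equivalent of that one edit, as a sketch only; minikube performs it over SSH with sed rather than in-process:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchContainerdConfig applies the same transformation as
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`:
// whatever the line said before, force SystemdCgroup = false while
// preserving its indentation.
func patchContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, patched, 0644)
}

func main() {
	if err := patchContainerdConfig("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}
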
	I1216 04:04:42.257333 2078887 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:04:42.257501 2078887 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:04:42.262740 2078887 start.go:564] Will wait 60s for crictl version
	I1216 04:04:42.262889 2078887 ssh_runner.go:195] Run: which crictl
	I1216 04:04:42.267776 2078887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:04:42.299498 2078887 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:04:42.299668 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.325553 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.351925 2078887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:04:42.355177 2078887 cli_runner.go:164] Run: docker network inspect no-preload-255023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:04:42.376901 2078887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 04:04:42.381129 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:04:42.391782 2078887 kubeadm.go:884] updating cluster {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:04:42.391898 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:42.391946 2078887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:04:42.421381 2078887 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:04:42.421429 2078887 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:04:42.421437 2078887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:04:42.421531 2078887 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-255023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:04:42.421601 2078887 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:04:42.451031 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:42.451088 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:42.451111 2078887 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:04:42.451134 2078887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-255023 NodeName:no-preload-255023 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:04:42.451548 2078887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-255023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:04:42.451660 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:04:42.462557 2078887 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:04:42.462665 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:04:42.470706 2078887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:04:42.484036 2078887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:04:42.496679 2078887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
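
The 2237-byte kubeadm.yaml.new just copied is the YAML shown above, rendered from the kubeadm.go:190 option set. A trimmed sketch of that kind of rendering with text/template; clusterConfigTmpl and kubeadmOpts are illustrative stand-ins that cover only a few of the real fields:

package main

import (
	"os"
	"text/template"
)

// clusterConfigTmpl keeps just three knobs from the full config above.
const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type kubeadmOpts struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	t := template.Must(template.New("cc").Parse(clusterConfigTmpl))
	// Values taken from the kubeadm options logged above.
	_ = t.Execute(os.Stdout, kubeadmOpts{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.35.0-beta.0",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	})
}
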
	I1216 04:04:42.510060 2078887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:04:42.514034 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:04:42.523944 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.642280 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:42.658128 2078887 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023 for IP: 192.168.85.2
	I1216 04:04:42.658161 2078887 certs.go:195] generating shared ca certs ...
	I1216 04:04:42.658178 2078887 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:42.658357 2078887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:04:42.658425 2078887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:04:42.658440 2078887 certs.go:257] generating profile certs ...
	I1216 04:04:42.658560 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.key
	I1216 04:04:42.658648 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5
	I1216 04:04:42.658713 2078887 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key
	I1216 04:04:42.658847 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:04:42.658904 2078887 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:04:42.658920 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:04:42.658963 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:04:42.659011 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:04:42.659085 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:04:42.659170 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:42.659889 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:04:42.682344 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:04:42.731773 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:04:42.759464 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:04:42.781713 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:04:42.800339 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:04:42.819107 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:04:42.837811 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 04:04:42.856139 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:04:42.873711 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:04:42.892395 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:04:42.910549 2078887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:04:42.924760 2078887 ssh_runner.go:195] Run: openssl version
	I1216 04:04:42.931736 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.940294 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:04:42.948204 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952285 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952396 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.993553 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:04:43.001452 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.010861 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:04:43.019267 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023881 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023989 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.065733 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:04:43.074014 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.082044 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:04:43.090335 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094833 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094908 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.137155 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
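
The ls/openssl/ln sequences above install each PEM into OpenSSL's hashed trust directory: compute the subject hash (the 3ec20f2e, b5213941 and 51391683 values behind the .0 names) and symlink /etc/ssl/certs/<hash>.0 to the certificate, which is how OpenSSL locates CAs by subject. A sketch that shells out to openssl exactly as the log does; installTrustLink is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrustLink mirrors the sequence above: hash the cert subject,
// then create the <hash>.0 symlink in /etc/ssl/certs.
func installTrustLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installTrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
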
	I1216 04:04:43.145351 2078887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:04:43.149907 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:04:43.192388 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:04:43.235812 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:04:43.277441 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:04:43.318805 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:04:43.360025 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
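
Each `-checkend 86400` call above asks one question: does the certificate expire within the next 24 hours, in which case it would need regenerating before the control plane restarts. An in-process Go analogue, as a sketch; expiresWithin is illustrative, minikube shells out to openssl as logged:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the in-process analogue of
// `openssl x509 -noout -in <cert> -checkend 86400`: true when the
// certificate's NotAfter falls inside the next window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
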
	I1216 04:04:43.402731 2078887 kubeadm.go:401] StartCluster: {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:43.402829 2078887 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:04:43.402928 2078887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:04:43.429949 2078887 cri.go:89] found id: ""
	I1216 04:04:43.430063 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:04:43.452392 2078887 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:04:43.452428 2078887 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:04:43.452517 2078887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:04:43.466566 2078887 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:04:43.467070 2078887 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.467226 2078887 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-255023" cluster setting kubeconfig missing "no-preload-255023" context setting]
	I1216 04:04:43.467608 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.469283 2078887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:04:43.485730 2078887 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 04:04:43.485775 2078887 kubeadm.go:602] duration metric: took 33.340688ms to restartPrimaryControlPlane
	I1216 04:04:43.485805 2078887 kubeadm.go:403] duration metric: took 83.08421ms to StartCluster
	I1216 04:04:43.485836 2078887 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.485913 2078887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.486639 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
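
kubeconfig.go:62 finds both the cluster and the context entries missing and repairs the file under the lock.go:35 write lock. With client-go's clientcmd package the repair amounts to load, insert, write back; a sketch under that assumption (repairKubeconfig is a hypothetical wrapper, and the file lock is omitted):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds the missing cluster and context entries that
// kubeconfig.go:62 reports, then rewrites the file.
func repairKubeconfig(path, name, server string, caCert []byte) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{
		Server:                   server, // e.g. https://192.168.85.2:8443
		CertificateAuthorityData: caCert,
	}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig("/home/jenkins/.kube/config", "no-preload-255023",
		"https://192.168.85.2:8443", nil)
}
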
	I1216 04:04:43.486917 2078887 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:04:43.487330 2078887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:04:43.487405 2078887 addons.go:70] Setting storage-provisioner=true in profile "no-preload-255023"
	I1216 04:04:43.487422 2078887 addons.go:239] Setting addon storage-provisioner=true in "no-preload-255023"
	I1216 04:04:43.487445 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.488102 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.488423 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:43.488504 2078887 addons.go:70] Setting dashboard=true in profile "no-preload-255023"
	I1216 04:04:43.488521 2078887 addons.go:239] Setting addon dashboard=true in "no-preload-255023"
	W1216 04:04:43.488541 2078887 addons.go:248] addon dashboard should already be in state true
	I1216 04:04:43.488579 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.489074 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.491900 2078887 addons.go:70] Setting default-storageclass=true in profile "no-preload-255023"
	I1216 04:04:43.491932 2078887 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-255023"
	I1216 04:04:43.492873 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.492947 2078887 out.go:179] * Verifying Kubernetes components...
	I1216 04:04:43.501909 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:43.540041 2078887 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:04:43.544945 2078887 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:04:43.547811 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:04:43.547843 2078887 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:04:43.547914 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.559171 2078887 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:04:43.559336 2078887 addons.go:239] Setting addon default-storageclass=true in "no-preload-255023"
	I1216 04:04:43.559370 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.559803 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.563234 2078887 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.563261 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:04:43.563329 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.613200 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.627516 2078887 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.627538 2078887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:04:43.627600 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.647225 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.663344 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
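
Each of the three sshutil.go lines above opens its own SSH connection to the same forwarded Docker port (127.0.0.1:34664, user docker, the profile's id_rsa), apparently one per manifest transfer. A hedged sketch of opening such a client with golang.org/x/crypto/ssh, with the host, port, user, and key path copied from the log (this is not minikube's sshutil API):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user, and port are taken from the sshutil.go lines above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts check
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34664", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh client connected")
    }
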
	I1216 04:04:43.730458 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:43.761779 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:04:43.761800 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:04:43.776412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:04:43.776431 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:04:43.790891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:04:43.790913 2078887 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:04:43.792062 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.811412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:04:43.811477 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:04:43.827119 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:04:43.827185 2078887 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:04:43.838623 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.851891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:04:43.851965 2078887 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:04:43.868373 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:04:43.868445 2078887 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:04:43.883425 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:04:43.883498 2078887 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:04:43.898225 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:43.898297 2078887 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:04:43.913600 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
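
The ten dashboard manifests staged above are applied in one kubectl invocation: KUBECONFIG points at the in-VM kubeconfig, the kubectl binary is the version-pinned one under /var/lib/minikube/binaries, and each file gets its own -f flag. A sketch of assembling that command line (buildApplyCmd is a hypothetical helper, not minikube's API):

    package main

    import (
        "fmt"
        "strings"
    )

    // buildApplyCmd mirrors the shape of the apply commands in the log:
    // sudo KUBECONFIG=<cfg> <kubectl> apply [--force] -f <m1> -f <m2> ...
    func buildApplyCmd(kubectlBin, kubeconfig string, manifests []string, force bool) string {
        parts := []string{"sudo", "KUBECONFIG=" + kubeconfig, kubectlBin, "apply"}
        if force {
            parts = append(parts, "--force")
        }
        for _, m := range manifests {
            parts = append(parts, "-f", m)
        }
        return strings.Join(parts, " ")
    }

    func main() {
        fmt.Println(buildApplyCmd(
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            []string{
                "/etc/kubernetes/addons/dashboard-ns.yaml",
                "/etc/kubernetes/addons/dashboard-svc.yaml",
            },
            false,
        ))
    }
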
	I1216 04:04:44.438735 2078887 node_ready.go:35] waiting up to 6m0s for node "no-preload-255023" to be "Ready" ...
	W1216 04:04:44.439130 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439312 2078887 retry.go:31] will retry after 305.613762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.439316 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439337 2078887 retry.go:31] will retry after 363.187652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.439533 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439559 2078887 retry.go:31] will retry after 272.903595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.713163 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:44.745739 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:44.781147 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.781180 2078887 retry.go:31] will retry after 329.721194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.803439 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:44.821890 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.821920 2078887 retry.go:31] will retry after 342.537223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.869557 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.869592 2078887 retry.go:31] will retry after 400.087881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.112248 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:45.165426 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.247199 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.247273 2078887 retry.go:31] will retry after 632.091254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.270745 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.301341 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.301432 2078887 retry.go:31] will retry after 431.279641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:45.357125 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.357163 2078887 retry.go:31] will retry after 448.988888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.733393 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.794896 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.794941 2078887 retry.go:31] will retry after 735.19991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.807205 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.867083 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.867117 2078887 retry.go:31] will retry after 568.360561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.880293 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:45.942564 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.942654 2078887 retry.go:31] will retry after 591.592305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.436264 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:46.439868 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:04:46.515391 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.515423 2078887 retry.go:31] will retry after 863.502918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
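
While the addon applies churn, node_ready.go is independently waiting up to 6m0s for the node to report Ready, and it treats errors like the connection-refused Get at 04:04:46 as transient rather than fatal. A hedged client-go sketch of such a polling loop; the kubeconfig path and node name come from the log, while the helper name and 3s interval are invented:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            ready, err := nodeReady(cs, "no-preload-255023")
            if err != nil {
                fmt.Println("error getting node (will retry):", err) // transient, e.g. connection refused
            } else if ready {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for node to be Ready")
    }
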
	I1216 04:04:46.530605 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:46.535089 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:46.607927 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.607962 2078887 retry.go:31] will retry after 1.115944939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:46.613433 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.613467 2078887 retry.go:31] will retry after 961.68966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
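Every stderr block above is one and the same failure: before applying a manifest, kubectl's client-side validation downloads the OpenAPI schema from the apiserver, and with nothing listening on localhost:8443 that download dies with "connection refused" before any YAML is even evaluated (hence the identical message for all ten dashboard files, and the hint to pass --validate=false). A minimal Go sketch, illustrative only and not minikube or kubectl code, reproduces the probe that is failing here:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's certificate is not in the probe's trust store;
		// skip verification for this diagnostic request only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2")
	if err != nil {
		// While the apiserver is down this prints a "connect: connection
		// refused" error, matching the stderr captured in the log above.
		fmt.Println("openapi fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered with status:", resp.Status)
}

Run against a healthy cluster the same probe returns 200, which is why these applies succeed as soon as the apiserver comes back up.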
	I1216 04:04:47.379736 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:47.458969 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.459090 2078887 retry.go:31] will retry after 1.606575866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.575407 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:47.642476 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.642514 2078887 retry.go:31] will retry after 2.560273252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.724901 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:47.785232 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.785270 2078887 retry.go:31] will retry after 2.616642999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:48.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:49.066818 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:49.131769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:49.131806 2078887 retry.go:31] will retry after 3.366815571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
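The growing intervals in the retry.go lines above (961ms, 1.6s, 2.5s, 3.3s, 8.4s, ...) are a jittered, roughly doubling backoff between apply attempts. A hedged sketch of that pattern follows; it is an illustrative reimplementation of the visible behavior, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to maxAttempts times, sleeping a jittered,
// roughly doubling interval between failures, like the intervals logged above.
func retryWithBackoff(maxAttempts int, op func() error) error {
	delay := time.Second
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
}

func main() {
	err := retryWithBackoff(4, func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
	fmt.Println(err)
}

Because the underlying fault (no apiserver) never clears during this window, every attempt fails the same way and the backoff simply stretches out, which is what the remainder of this log shows.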
	I1216 04:04:50.203910 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:50.281554 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.281591 2078887 retry.go:31] will retry after 3.322699521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.403034 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:50.475418 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.475451 2078887 retry.go:31] will retry after 3.920781833s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:50.940166 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:52.499306 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:52.566228 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:52.566262 2078887 retry.go:31] will retry after 2.315880156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:53.440268 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:53.604610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:53.664371 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:53.664411 2078887 retry.go:31] will retry after 4.867931094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.396477 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:54.458906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.458940 2078887 retry.go:31] will retry after 6.25682185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.882414 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:54.945906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.945940 2078887 retry.go:31] will retry after 8.419891658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:55.939826 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:04:58.439821 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:58.533209 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:58.597277 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:58.597315 2078887 retry.go:31] will retry after 8.821330278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:00.440193 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:00.716680 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:00.792490 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:00.792525 2078887 retry.go:31] will retry after 4.988340186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:02.939239 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:03.366954 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:03.427635 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:03.427664 2078887 retry.go:31] will retry after 11.977275357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:04.939595 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:05.781026 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:05.843492 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:05.843528 2078887 retry.go:31] will retry after 12.145550583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:06.939757 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:07.419555 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:07.505584 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:07.505621 2078887 retry.go:31] will retry after 12.780052365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:08.940202 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:11.440118 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:13.940295 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
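The node_ready warnings above repeat on a fixed ~2.5s cadence until the apiserver answers. Approximated as a shell loop (a sketch only; minikube performs this poll in Go, hitting /api/v1/nodes/no-preload-255023 directly):

    # Poll the node's Ready condition roughly the way the log above does
    until kubectl get node no-preload-255023 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
      sleep 2.5
    done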
	I1216 04:05:15.405274 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:15.464480 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:15.464513 2078887 retry.go:31] will retry after 7.284769957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:16.439936 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:17.989703 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:18.058004 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:18.058043 2078887 retry.go:31] will retry after 16.677849322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:18.440048 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:20.286526 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:20.345776 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:20.345812 2078887 retry.go:31] will retry after 16.385541559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:20.939362 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:22.749528 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:22.811867 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:22.811905 2078887 retry.go:31] will retry after 14.258552084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:22.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:25.439972 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:27.940078 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:30.440042 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:32.939848 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:34.736331 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:34.794887 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:34.794921 2078887 retry.go:31] will retry after 31.126157271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:35.439532 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:36.732300 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:36.795769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:36.795804 2078887 retry.go:31] will retry after 23.567098644s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.070890 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:37.130033 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.130066 2078887 retry.go:31] will retry after 22.575569039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:37.439758 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:39.439923 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:41.939932 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:44.439453 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:46.440129 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:48.939948 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:51.440009 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:53.939968 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:56.439836 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:58.939363 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:59.706828 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:59.787641 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:59.787749 2078887 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
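After the retry budget is exhausted, minikube downgrades the failure to a warning and continues the start instead of aborting. Once the apiserver is healthy again, the addon can be retried out of band with the standard CLI (profile name taken from this log):

    minikube -p no-preload-255023 addons enable default-storageclass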
	I1216 04:06:00.368445 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:00.461740 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:00.461775 2078887 retry.go:31] will retry after 38.977225184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:00.939472 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:03.439401 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:05.439853 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:05.921308 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:06.013608 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:06.013643 2078887 retry.go:31] will retry after 27.262873571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:07.440089 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:09.440233 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:11.939830 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:13.940070 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:16.440297 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:18.940013 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:21.439376 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:23.440046 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:25.440167 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:27.939439 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:29.939765 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:31.940029 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:33.277682 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:33.336094 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:33.336187 2078887 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1216 04:06:34.439449 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:36.440148 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:38.939781 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:39.439610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:39.498473 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:39.498576 2078887 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:06:39.501951 2078887 out.go:179] * Enabled addons: 
	I1216 04:06:39.505615 2078887 addons.go:530] duration metric: took 1m56.018282146s for enable addons: enabled=[]
	W1216 04:06:40.939880 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:42.942826 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:45.439332 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:47.439764 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:49.439998 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:51.939468 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:53.940191 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:56.440248 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:58.940043 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:01.439817 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:03.440122 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:05.940233 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:08.440149 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:10.440203 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:12.940001 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:15.439450 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:17.940353 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:20.439655 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:22.939251 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:24.939985 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:26.940134 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:29.439278 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:31.440124 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:33.940184 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:36.440157 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:38.940079 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:41.440316 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:43.939340 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:45.939890 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:47.940062 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:50.439942 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:52.440011 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:54.440323 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:56.939899 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:07:59.439323 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:01.439378 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:03.939463 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:06.439343 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:08.440329 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:10.939526 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:13.439382 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:15.439798 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:17.939298 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:20.439304 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:22.439457 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:24.939305 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:26.940098 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:28.940259 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:31.439311 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:33.440174 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:35.939344 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:37.940078 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:40.439899 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:42.939335 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:44.939411 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:46.939890 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:49.439873 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:51.440082 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:53.939432 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:55.939868 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:08:57.940176 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:00.440304 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:02.939752 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:05.439377 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:07.939797 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:09.939985 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:12.439307 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:14.439467 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:16.939931 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:18.940258 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:21.439950 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:23.440228 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:25.939594 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:27.940217 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:30.439353 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:32.439932 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:34.939928 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:36.940259 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:39.439438 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:41.440249 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:43.939955 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:45.940052 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:48.439303 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:50.940139 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:52.940243 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:55.440234 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:09:57.939424 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:00.445310 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:02.940226 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:05.440272 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:07.939309 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:09.939394 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:12.439365 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:14.440073 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:16.939954 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:19.439321 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:21.440142 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:23.440223 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:25.939694 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:27.940085 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:30.440080 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:32.939316 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:34.939494 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:36.940361 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:39.439999 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:41.939464 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:44.438951 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1216 04:10:44.438989 2078887 node_ready.go:38] duration metric: took 6m0.000194453s for node "no-preload-255023" to be "Ready" ...
	I1216 04:10:44.442224 2078887 out.go:203] 
	W1216 04:10:44.445097 2078887 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:10:44.445127 2078887 out.go:285] * 
	W1216 04:10:44.447308 2078887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:10:44.450299 2078887 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
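The stderr above shows the failure mode: minikube's readiness check polls the node every couple of seconds until a 6m deadline, and every probe to 192.168.85.2:8443 is refused, so the wait ends with "context deadline exceeded". A minimal Go sketch of that poll-until-deadline pattern (not minikube's actual node_ready.go code; waitTCPReady and the 2.5s interval are illustrative assumptions):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitTCPReady is a hypothetical helper mirroring the retry loop above:
// dial addr until it accepts connections or the context deadline passes.
func waitTCPReady(ctx context.Context, addr string) error {
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	// 192.168.85.2:8443 is the refused apiserver endpoint from the log.
	if err := waitTCPReady(ctx, "192.168.85.2:8443"); err != nil {
		fmt.Println(err) // "WaitNodeCondition: context deadline exceeded"
	}
}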
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-255023
helpers_test.go:244: (dbg) docker inspect no-preload-255023:

-- stdout --
	[
	    {
	        "Id": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	        "Created": "2025-12-16T03:54:15.810217174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2079014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:04:36.43296942Z",
	            "FinishedAt": "2025-12-16T04:04:35.01536344Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e-json.log",
	        "Name": "/no-preload-255023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-255023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-255023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	                "LowerDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-255023",
	                "Source": "/var/lib/docker/volumes/no-preload-255023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-255023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-255023",
	                "name.minikube.sigs.k8s.io": "no-preload-255023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8d77d7563a5b808d67c856f8fa0badaaabd481cb09d94e5909e754d7a8568f2",
	            "SandboxKey": "/var/run/docker/netns/e8d77d7563a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34664"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34665"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34668"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34666"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34667"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-255023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:af:07:e2:16:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba784dbb0bf675265a222a2ccbfc260249ee6464ab188d5ef5e9194204ab459f",
	                    "EndpointID": "d7abbd133c0576ac3aee0fa6c955e27a282475749fdbc6a2ade67d17e9ffc12d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-255023",
	                        "9e19dbb9154c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
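The inspect output confirms the container is running and that 8443/tcp is published on 127.0.0.1:34667, so the Docker port mapping for the apiserver is intact even though connections inside the node are refused. A small standard-library Go sketch for pulling that mapping out of `docker inspect` JSON (the struct is an illustrative subset of the fields shown above):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// container mirrors only the NetworkSettings.Ports fields used here.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-255023").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	if len(cs) == 0 {
		log.Fatal("no such container")
	}
	// 8443/tcp is the apiserver port; above it maps to 127.0.0.1:34667.
	for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}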
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 2 (335.581914ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
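`minikube status` encodes cluster state in its exit code, which is why the harness notes that exit status 2 "may be ok": the host is Running while other components are not. A hedged Go sketch of handling such informational exit codes with os/exec (the binary path and profile name are copied from this report):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the post-mortem step above.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-255023", "-n", "no-preload-255023")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit still prints the host state ("Running"); treat
		// it as informational rather than aborting the post-mortem.
		fmt.Printf("status exited %d (may be ok): %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host: %s", out)
}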
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-255023 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-092028 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:58 UTC │
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	│ stop    │ -p no-preload-255023 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ addons  │ enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ start   │ -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:04:36
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:04:36.142328 2078887 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:04:36.142562 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142588 2078887 out.go:374] Setting ErrFile to fd 2...
	I1216 04:04:36.142607 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142894 2078887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:04:36.143393 2078887 out.go:368] Setting JSON to false
	I1216 04:04:36.144368 2078887 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35221,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:04:36.144465 2078887 start.go:143] virtualization:  
	I1216 04:04:36.150070 2078887 out.go:179] * [no-preload-255023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:04:36.153020 2078887 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:04:36.153105 2078887 notify.go:221] Checking for updates...
	I1216 04:04:36.158759 2078887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:04:36.161685 2078887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:36.164397 2078887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:04:36.167148 2078887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:04:36.169926 2078887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:04:36.173114 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:36.173672 2078887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:04:36.208296 2078887 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:04:36.208429 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.272451 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.263127415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.272558 2078887 docker.go:319] overlay module found
	I1216 04:04:36.275603 2078887 out.go:179] * Using the docker driver based on existing profile
	I1216 04:04:36.278393 2078887 start.go:309] selected driver: docker
	I1216 04:04:36.278413 2078887 start.go:927] validating driver "docker" against &{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:36.278512 2078887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:04:36.279246 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.337226 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.327670673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.337567 2078887 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:04:36.337598 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:36.337648 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:36.337694 2078887 start.go:353] cluster config:
	{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:36.342771 2078887 out.go:179] * Starting "no-preload-255023" primary control-plane node in "no-preload-255023" cluster
	I1216 04:04:36.345786 2078887 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:04:36.348879 2078887 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:04:36.351831 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:36.352008 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.352389 2078887 cache.go:107] acquiring lock: {Name:mk0450325aacc7460afde2487596c0895eb14316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352472 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 04:04:36.352485 2078887 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.485µs
	I1216 04:04:36.352508 2078887 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 04:04:36.352528 2078887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:04:36.352738 2078887 cache.go:107] acquiring lock: {Name:mkc870fc6c12b387ee25e1b9ca9a320632395941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352823 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1216 04:04:36.352838 2078887 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 104.014µs
	I1216 04:04:36.352845 2078887 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352864 2078887 cache.go:107] acquiring lock: {Name:mk6b703a23a3ab5a8bd9af36cf3a59f27d4e1f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352901 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1216 04:04:36.352910 2078887 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 48.311µs
	I1216 04:04:36.352917 2078887 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352934 2078887 cache.go:107] acquiring lock: {Name:mk60dd72305503c0ea2e16b1d16ccd8081a54f90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352967 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1216 04:04:36.352983 2078887 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 49.427µs
	I1216 04:04:36.352990 2078887 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353002 2078887 cache.go:107] acquiring lock: {Name:mk6fa36dfa510ec7b8233463c2d901c70484a816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353044 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1216 04:04:36.353053 2078887 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 53.111µs
	I1216 04:04:36.353060 2078887 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353096 2078887 cache.go:107] acquiring lock: {Name:mk65b0b8ff216fe2e0c76a8328b4837c4b65b152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353150 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1216 04:04:36.353161 2078887 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 80.704µs
	I1216 04:04:36.353167 2078887 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1216 04:04:36.353185 2078887 cache.go:107] acquiring lock: {Name:mk91af5531a8fba3ae1331bf11e776d4365c8b42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353224 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1216 04:04:36.353234 2078887 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 51.182µs
	I1216 04:04:36.353241 2078887 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1216 04:04:36.353254 2078887 cache.go:107] acquiring lock: {Name:mke4e5785550dce8ce0ae772cb7060b431e39dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353286 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1216 04:04:36.353295 2078887 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.321µs
	I1216 04:04:36.353301 2078887 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1216 04:04:36.353308 2078887 cache.go:87] Successfully saved all images to host disk.
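Each "cache image ... exists" hit above resolves to a per-architecture image tarball under MINIKUBE_HOME (the /home/jenkins/... path in this run; ~/.minikube is only the default). A quick way to inspect that layout:

    # List the cached control-plane image tarballs checked above.
    ls "${MINIKUBE_HOME:-$HOME/.minikube}/cache/images/arm64/registry.k8s.io/"
    # etcd_3.6.5-0  kube-apiserver_v1.35.0-beta.0  pause_3.10.1  ...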
	I1216 04:04:36.371527 2078887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:04:36.371552 2078887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:04:36.371574 2078887 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:04:36.371606 2078887 start.go:360] acquireMachinesLock for no-preload-255023: {Name:mkc3fbe159f35ba61346866b1384afc1dc23074c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.371679 2078887 start.go:364] duration metric: took 52.75µs to acquireMachinesLock for "no-preload-255023"
	I1216 04:04:36.371703 2078887 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:04:36.371713 2078887 fix.go:54] fixHost starting: 
	I1216 04:04:36.371983 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.389190 2078887 fix.go:112] recreateIfNeeded on no-preload-255023: state=Stopped err=<nil>
	W1216 04:04:36.389224 2078887 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:04:36.392475 2078887 out.go:252] * Restarting existing docker container for "no-preload-255023" ...
	I1216 04:04:36.392575 2078887 cli_runner.go:164] Run: docker start no-preload-255023
	I1216 04:04:36.686743 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.709621 2078887 kic.go:430] container "no-preload-255023" state is running.
	I1216 04:04:36.710033 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:36.742909 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.743215 2078887 machine.go:94] provisionDockerMachine start ...
	I1216 04:04:36.743307 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:36.771624 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:36.772082 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:36.772113 2078887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:04:36.772685 2078887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51402->127.0.0.1:34664: read: connection reset by peer
	I1216 04:04:39.911257 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
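The repeated "docker container inspect -f ... HostPort" calls in this log extract the host port that Docker published for the node container's sshd (22/tcp). A standalone equivalent, with the profile name and port taken from this run:

    # Which host port did Docker map to 22/tcp inside the node container?
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      no-preload-255023
    # -> 34664, the 127.0.0.1 port libmachine dials above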
	I1216 04:04:39.911284 2078887 ubuntu.go:182] provisioning hostname "no-preload-255023"
	I1216 04:04:39.911351 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:39.929644 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:39.929951 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:39.929968 2078887 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-255023 && echo "no-preload-255023" | sudo tee /etc/hostname
	I1216 04:04:40.091006 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 04:04:40.091150 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.111323 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:40.111660 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:40.111686 2078887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-255023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-255023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-255023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:04:40.255648 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
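The remote script above is minikube's idempotent /etc/hosts fix-up: only if no entry already carries the new hostname does it either rewrite the existing 127.0.1.1 line or append one. The same logic as a standalone sketch (hostname from this run; the variable name is illustrative):

    NEW_HOSTNAME=no-preload-255023
    if ! grep -q "[[:space:]]${NEW_HOSTNAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # Re-point the existing 127.0.1.1 entry at the new hostname.
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi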
	I1216 04:04:40.255679 2078887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:04:40.255708 2078887 ubuntu.go:190] setting up certificates
	I1216 04:04:40.255719 2078887 provision.go:84] configureAuth start
	I1216 04:04:40.255800 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.274459 2078887 provision.go:143] copyHostCerts
	I1216 04:04:40.274544 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:04:40.274559 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:04:40.274643 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:04:40.274749 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:04:40.274761 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:04:40.274788 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:04:40.274850 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:04:40.274858 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:04:40.274882 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:04:40.274932 2078887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.no-preload-255023 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-255023]
	I1216 04:04:40.540362 2078887 provision.go:177] copyRemoteCerts
	I1216 04:04:40.540434 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:04:40.540481 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.560258 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.658891 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:04:40.677291 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:04:40.696276 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 04:04:40.714152 2078887 provision.go:87] duration metric: took 458.418313ms to configureAuth
	I1216 04:04:40.714179 2078887 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:04:40.714393 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:40.714406 2078887 machine.go:97] duration metric: took 3.971173434s to provisionDockerMachine
	I1216 04:04:40.714414 2078887 start.go:293] postStartSetup for "no-preload-255023" (driver="docker")
	I1216 04:04:40.714431 2078887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:04:40.714490 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:04:40.714532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.731640 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.827149 2078887 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:04:40.830526 2078887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:04:40.830554 2078887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:04:40.830567 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:04:40.830622 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:04:40.830706 2078887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:04:40.830809 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:04:40.838400 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:40.856071 2078887 start.go:296] duration metric: took 141.636209ms for postStartSetup
	I1216 04:04:40.856173 2078887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:04:40.856212 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.873995 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.968232 2078887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:04:40.973380 2078887 fix.go:56] duration metric: took 4.601659976s for fixHost
	I1216 04:04:40.973407 2078887 start.go:83] releasing machines lock for "no-preload-255023", held for 4.601715131s
	I1216 04:04:40.973483 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.991467 2078887 ssh_runner.go:195] Run: cat /version.json
	I1216 04:04:40.991532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.991607 2078887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:04:40.991672 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:41.016410 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.023238 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.219027 2078887 ssh_runner.go:195] Run: systemctl --version
	I1216 04:04:41.225735 2078887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:04:41.231530 2078887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:04:41.231614 2078887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:04:41.245369 2078887 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
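The find invocation above side-lines any pre-existing bridge/podman CNI configs (none here) so that the kindnet config chosen earlier wins. An equivalent standalone form using the same ".mk_disabled" suffix:

    # Rename conflicting CNI configs out of the way.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;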
	I1216 04:04:41.245409 2078887 start.go:496] detecting cgroup driver to use...
	I1216 04:04:41.245441 2078887 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:04:41.245491 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:04:41.264763 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:04:41.278940 2078887 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:04:41.279078 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:04:41.295177 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:04:41.308854 2078887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:04:41.425808 2078887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:04:41.539126 2078887 docker.go:234] disabling docker service ...
	I1216 04:04:41.539232 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:04:41.555103 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:04:41.569579 2078887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:04:41.697114 2078887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:04:41.825875 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:04:41.840190 2078887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:04:41.856382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:04:41.866837 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:04:41.876037 2078887 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:04:41.876170 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:04:41.885348 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.894763 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:04:41.904382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.913120 2078887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:04:41.922033 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:04:41.931520 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:04:41.940760 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:04:41.953916 2078887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:04:41.967109 2078887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:04:41.975264 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.127685 2078887 ssh_runner.go:195] Run: sudo systemctl restart containerd
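Taken together, the sed edits above pin a handful of keys in /etc/containerd/config.toml before the restart. They match key names wherever they occur, so the exact TOML nesting depends on the containerd version; the values they enforce in this run can be checked with:

    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false            (matches the detected cgroupfs driver)
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true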
	I1216 04:04:42.257333 2078887 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:04:42.257501 2078887 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:04:42.262740 2078887 start.go:564] Will wait 60s for crictl version
	I1216 04:04:42.262889 2078887 ssh_runner.go:195] Run: which crictl
	I1216 04:04:42.267776 2078887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:04:42.299498 2078887 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:04:42.299668 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.325553 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.351925 2078887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:04:42.355177 2078887 cli_runner.go:164] Run: docker network inspect no-preload-255023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:04:42.376901 2078887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 04:04:42.381129 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:04:42.391782 2078887 kubeadm.go:884] updating cluster {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:04:42.391898 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:42.391946 2078887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:04:42.421381 2078887 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:04:42.421429 2078887 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:04:42.421437 2078887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:04:42.421531 2078887 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-255023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:04:42.421601 2078887 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:04:42.451031 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:42.451088 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:42.451111 2078887 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:04:42.451134 2078887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-255023 NodeName:no-preload-255023 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:04:42.451548 2078887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-255023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:04:42.451660 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:04:42.462557 2078887 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:04:42.462665 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:04:42.470706 2078887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:04:42.484036 2078887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:04:42.496679 2078887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 04:04:42.510060 2078887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:04:42.514034 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
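This grep/echo/cp rewrite is the same pattern used for host.minikube.internal earlier: filter out any stale entry, append the fresh one, and copy the result back over /etc/hosts. The net effect inside the node, with the addresses from this run:

    # 192.168.85.1  host.minikube.internal           (docker network gateway)
    # 192.168.85.2  control-plane.minikube.internal  (API server endpoint)
    getent hosts control-plane.minikube.internal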
	I1216 04:04:42.523944 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.642280 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:42.658128 2078887 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023 for IP: 192.168.85.2
	I1216 04:04:42.658161 2078887 certs.go:195] generating shared ca certs ...
	I1216 04:04:42.658178 2078887 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:42.658357 2078887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:04:42.658425 2078887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:04:42.658440 2078887 certs.go:257] generating profile certs ...
	I1216 04:04:42.658560 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.key
	I1216 04:04:42.658648 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5
	I1216 04:04:42.658713 2078887 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key
	I1216 04:04:42.658847 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:04:42.658904 2078887 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:04:42.658920 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:04:42.658963 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:04:42.659011 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:04:42.659085 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:04:42.659170 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:42.659889 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:04:42.682344 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:04:42.731773 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:04:42.759464 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:04:42.781713 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:04:42.800339 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:04:42.819107 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:04:42.837811 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 04:04:42.856139 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:04:42.873711 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:04:42.892395 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:04:42.910549 2078887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:04:42.924760 2078887 ssh_runner.go:195] Run: openssl version
	I1216 04:04:42.931736 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.940294 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:04:42.948204 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952285 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952396 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.993553 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:04:43.001452 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.010861 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:04:43.019267 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023881 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023989 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.065733 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:04:43.074014 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.082044 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:04:43.090335 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094833 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094908 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.137155 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
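The test/ln/openssl sequence above installs each CA into the node's trust store using OpenSSL's subject-hash link naming (hash plus ".0"). Condensed for one of the certs, with the file and resulting link name from this run:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 here
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo installed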
	I1216 04:04:43.145351 2078887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:04:43.149907 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:04:43.192388 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:04:43.235812 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:04:43.277441 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:04:43.318805 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:04:43.360025 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
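All six openssl runs above use -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 h) from now; that is how minikube decides the existing certs can be reused. Standalone:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"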
	I1216 04:04:43.402731 2078887 kubeadm.go:401] StartCluster: {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:43.402829 2078887 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:04:43.402928 2078887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:04:43.429949 2078887 cri.go:89] found id: ""
	I1216 04:04:43.430063 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:04:43.452392 2078887 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:04:43.452428 2078887 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:04:43.452517 2078887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:04:43.466566 2078887 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:04:43.467070 2078887 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.467226 2078887 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-255023" cluster setting kubeconfig missing "no-preload-255023" context setting]
	I1216 04:04:43.467608 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
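
The lock.go entries show kubeconfig writes guarded by a lock acquired with Delay:500ms and Timeout:1m0s. A sketch of that poll-until-timeout acquisition; the O_EXCL lock-file mechanism is an assumption, not necessarily what minikube uses:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire polls every `delay` until `timeout` for exclusive ownership of
    // a lock file, mirroring the Delay/Timeout fields logged above.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // release callback
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("lock %s: timed out after %v", path, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to rewrite kubeconfig")
    }
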
	I1216 04:04:43.469283 2078887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:04:43.485730 2078887 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 04:04:43.485775 2078887 kubeadm.go:602] duration metric: took 33.340688ms to restartPrimaryControlPlane
	I1216 04:04:43.485805 2078887 kubeadm.go:403] duration metric: took 83.08421ms to StartCluster
	I1216 04:04:43.485836 2078887 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.485913 2078887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.486639 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.486917 2078887 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:04:43.487330 2078887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
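
Only the entries flagged true in the toEnable map (dashboard, default-storageclass, storage-provisioner) produce the "Setting addon" lines that follow. A trimmed sketch of that filtering step, with the map literal abbreviated from the log:

    package main

    import "fmt"

    func main() {
        // Abbreviated from the toEnable map above; only true entries matter.
        toEnable := map[string]bool{
            "dashboard":            true,
            "default-storageclass": true,
            "storage-provisioner":  true,
            "metrics-server":       false,
            "ingress":              false,
        }
        for name, enabled := range toEnable {
            if !enabled {
                continue
            }
            fmt.Printf("Setting addon %s=true in profile %q\n", name, "no-preload-255023")
        }
    }
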
	I1216 04:04:43.487405 2078887 addons.go:70] Setting storage-provisioner=true in profile "no-preload-255023"
	I1216 04:04:43.487422 2078887 addons.go:239] Setting addon storage-provisioner=true in "no-preload-255023"
	I1216 04:04:43.487445 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.488102 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.488423 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:43.488504 2078887 addons.go:70] Setting dashboard=true in profile "no-preload-255023"
	I1216 04:04:43.488521 2078887 addons.go:239] Setting addon dashboard=true in "no-preload-255023"
	W1216 04:04:43.488541 2078887 addons.go:248] addon dashboard should already be in state true
	I1216 04:04:43.488579 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.489074 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.491900 2078887 addons.go:70] Setting default-storageclass=true in profile "no-preload-255023"
	I1216 04:04:43.491932 2078887 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-255023"
	I1216 04:04:43.492873 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.492947 2078887 out.go:179] * Verifying Kubernetes components...
	I1216 04:04:43.501909 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:43.540041 2078887 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:04:43.544945 2078887 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:04:43.547811 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:04:43.547843 2078887 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:04:43.547914 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
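
The cli_runner invocation above evaluates a Go template against docker inspect to find which host port forwards to the container's 22/tcp, i.e. the node's SSH endpoint. A sketch of the same lookup from Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template as the cli_runner command: index into
        // .NetworkSettings.Ports["22/tcp"] and take the first binding's HostPort.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", tmpl, "no-preload-255023").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("SSH host port:", strings.TrimSpace(string(out)))
    }

In this run the lookup resolves to host port 34664, matching the sshutil entries that follow.
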
	I1216 04:04:43.559171 2078887 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:04:43.559336 2078887 addons.go:239] Setting addon default-storageclass=true in "no-preload-255023"
	I1216 04:04:43.559370 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.559803 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.563234 2078887 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.563261 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:04:43.563329 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.613200 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
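
The sshutil entry records the client parameters: the machine key path, user docker, and 127.0.0.1:34664 (the host port found above). A sketch of opening such a session with golang.org/x/crypto/ssh (an assumed library choice; minikube's sshutil internals are not shown in this log):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user, and address taken from the sshutil entry above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: skip host key checks
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34664", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("uname -m")
        fmt.Printf("%s err=%v\n", out, err)
    }
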
	I1216 04:04:43.627516 2078887 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.627538 2078887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:04:43.627600 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.647225 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.663344 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.730458 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:43.761779 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:04:43.761800 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:04:43.776412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:04:43.776431 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:04:43.790891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:04:43.790913 2078887 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:04:43.792062 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.811412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:04:43.811477 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:04:43.827119 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:04:43.827185 2078887 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:04:43.838623 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.851891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:04:43.851965 2078887 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:04:43.868373 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:04:43.868445 2078887 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:04:43.883425 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:04:43.883498 2078887 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:04:43.898225 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:43.898297 2078887 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:04:43.913600 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:44.438735 2078887 node_ready.go:35] waiting up to 6m0s for node "no-preload-255023" to be "Ready" ...
	W1216 04:04:44.439130 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439312 2078887 retry.go:31] will retry after 305.613762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.439316 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439337 2078887 retry.go:31] will retry after 363.187652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.439533 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439559 2078887 retry.go:31] will retry after 272.903595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.713163 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:44.745739 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:44.781147 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.781180 2078887 retry.go:31] will retry after 329.721194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.803439 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:44.821890 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.821920 2078887 retry.go:31] will retry after 342.537223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.869557 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.869592 2078887 retry.go:31] will retry after 400.087881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.112248 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:45.165426 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.247199 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.247273 2078887 retry.go:31] will retry after 632.091254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.270745 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.301341 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.301432 2078887 retry.go:31] will retry after 431.279641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:45.357125 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.357163 2078887 retry.go:31] will retry after 448.988888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.733393 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.794896 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.794941 2078887 retry.go:31] will retry after 735.19991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.807205 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.867083 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.867117 2078887 retry.go:31] will retry after 568.360561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.880293 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:45.942564 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.942654 2078887 retry.go:31] will retry after 591.592305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.436264 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:46.439868 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
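
node_ready.go polls the node's Ready condition for up to 6 minutes, treating connection-refused errors like the one above as transient. A sketch of the same wait loop using client-go (an assumption about the API; minikube's own polling code is not shown in this log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-255023", metav1.GetOptions{})
            if err != nil {
                // Mirrors the warning above: the apiserver may still be
                // restarting, so treat the error as transient and keep polling.
                fmt.Println("will retry:", err)
                time.Sleep(2 * time.Second)
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node Ready")
    }
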
	W1216 04:04:46.515391 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.515423 2078887 retry.go:31] will retry after 863.502918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.530605 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:46.535089 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:46.607927 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.607962 2078887 retry.go:31] will retry after 1.115944939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:46.613433 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.613467 2078887 retry.go:31] will retry after 961.68966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the ten dashboard validation errors logged above]
	I1216 04:04:47.379736 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:47.458969 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.459090 2078887 retry.go:31] will retry after 1.606575866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	I1216 04:04:47.575407 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:47.642476 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.642514 2078887 retry.go:31] will retry after 2.560273252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the ten dashboard validation errors logged above]
	I1216 04:04:47.724901 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:47.785232 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.785270 2078887 retry.go:31] will retry after 2.616642999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	W1216 04:04:48.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
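Editor's note: in parallel with the addon retries, node_ready.go polls the apiserver at 192.168.85.2:8443 for the node's Ready condition every couple of seconds, logging transient errors and continuing rather than aborting. A hedged Go sketch of that poll loop; checkReady is a hypothetical stand-in, not a real minikube helper:

```go
// Illustrative node-readiness poll suggested by the node_ready.go
// lines: keep asking until Ready flips to true or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitNodeReady(name string, timeout time.Duration, checkReady func(string) (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := checkReady(name)
		if err != nil {
			// Matches the log: a transient "connection refused"
			// is recorded and the poll simply goes around again.
			fmt.Printf("error getting node %q Ready status (will retry): %v\n", name, err)
		} else if ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New("timed out waiting for node to be Ready")
}

func main() {
	_ = waitNodeReady("no-preload-255023", 10*time.Second, func(string) (bool, error) {
		return false, errors.New("dial tcp 192.168.85.2:8443: connect: connection refused")
	})
}
```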
	I1216 04:04:49.066818 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:49.131769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:49.131806 2078887 retry.go:31] will retry after 3.366815571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	I1216 04:04:50.203910 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:50.281554 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.281591 2078887 retry.go:31] will retry after 3.322699521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the ten dashboard validation errors logged above]
	I1216 04:04:50.403034 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:50.475418 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.475451 2078887 retry.go:31] will retry after 3.920781833s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	W1216 04:04:50.940166 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:52.499306 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:52.566228 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:52.566262 2078887 retry.go:31] will retry after 2.315880156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	W1216 04:04:53.440268 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:53.604610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:53.664371 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:53.664411 2078887 retry.go:31] will retry after 4.867931094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the ten dashboard validation errors logged above]
	I1216 04:04:54.396477 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:54.458906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.458940 2078887 retry.go:31] will retry after 6.25682185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	I1216 04:04:54.882414 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:54.945906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.945940 2078887 retry.go:31] will retry after 8.419891658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	W1216 04:04:55.939826 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:04:58.439821 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:58.533209 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:58.597277 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:58.597315 2078887 retry.go:31] will retry after 8.821330278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the ten dashboard validation errors logged above]
	W1216 04:05:00.440193 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:00.716680 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:00.792490 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:00.792525 2078887 retry.go:31] will retry after 4.988340186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	W1216 04:05:02.939239 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:03.366954 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:03.427635 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:03.427664 2078887 retry.go:31] will retry after 11.977275357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	W1216 04:05:04.939595 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:05.781026 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:05.843492 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:05.843528 2078887 retry.go:31] will retry after 12.145550583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[duplicate stdout/stderr omitted; identical to the apply failure logged above]
	W1216 04:05:06.939757 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:07.419555 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:07.505584 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:07.505621 2078887 retry.go:31] will retry after 12.780052365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:08.940202 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:11.440118 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:13.940295 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:15.405274 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:15.464480 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:15.464513 2078887 retry.go:31] will retry after 7.284769957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:16.439936 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:17.989703 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:18.058004 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:18.058043 2078887 retry.go:31] will retry after 16.677849322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:18.440048 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:20.286526 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:20.345776 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:20.345812 2078887 retry.go:31] will retry after 16.385541559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:20.939362 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:22.749528 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:22.811867 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:22.811905 2078887 retry.go:31] will retry after 14.258552084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:22.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:25.439972 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:27.940078 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:30.440042 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:32.939848 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:34.736331 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:34.794887 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:34.794921 2078887 retry.go:31] will retry after 31.126157271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:35.439532 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:36.732300 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:36.795769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:36.795804 2078887 retry.go:31] will retry after 23.567098644s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.070890 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:37.130033 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.130066 2078887 retry.go:31] will retry after 22.575569039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:37.439758 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:39.439923 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:41.939932 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:44.439453 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:46.440129 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:48.939948 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:51.440009 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:53.939968 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
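
The node_ready.go warnings interleaved above come from polling the node's Ready condition (roughly every 2.5s here) until the apiserver answers. A minimal client-go sketch of such a poll, assuming only the kubeconfig path and node name taken from the log and nothing about minikube's internals:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-255023", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this fails with "connection refused",
			// exactly like the node_ready.go warnings above.
			fmt.Println("error getting node (will retry):", err)
		} else if nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // assumed poll interval
	}
}
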
	I1216 04:05:57.654978 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000290251s
	I1216 04:05:57.655319 2073073 kubeadm.go:319] 
	I1216 04:05:57.655442 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:05:57.655501 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:05:57.655753 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:05:57.655760 2073073 kubeadm.go:319] 
	I1216 04:05:57.656095 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:05:57.656161 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:05:57.656330 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:05:57.656339 2073073 kubeadm.go:319] 
	I1216 04:05:57.661429 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:05:57.661908 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:05:57.662048 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:05:57.662311 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:05:57.662326 2073073 kubeadm.go:319] 
	I1216 04:05:57.662412 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 04:05:57.662579 2073073 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000290251s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
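
The wait-control-plane failure above boils down to one probe: kubeadm repeatedly hits the kubelet's health endpoint (the 'curl -sSL http://127.0.0.1:10248/healthz' named in the error) until a 4m0s deadline expires. A minimal Go equivalent of that probe, with an assumed poll interval:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // kubeadm's wait-control-plane timeout above
	client := &http.Client{Timeout: 5 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("kubelet healthy: %d %s\n", resp.StatusCode, body)
			return
		}
		// A kubelet that never starts serving yields exactly the
		// "context deadline exceeded" outcome seen in this run.
		fmt.Println("kubelet not ready yet:", err)
		time.Sleep(2 * time.Second) // assumed interval
	}
	fmt.Println("kubelet is not healthy after 4m0s")
}
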
	
	I1216 04:05:57.662661 2073073 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 04:05:58.084120 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:05:58.098877 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:05:58.098960 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:05:58.107810 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:05:58.107839 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:05:58.107907 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:05:58.116252 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:05:58.116319 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:05:58.123966 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:05:58.131928 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:05:58.131999 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:05:58.139938 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.148354 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:05:58.148421 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.155951 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:05:58.163949 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:05:58.164019 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
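
The grep/rm sequence above checks each kubeconfig for the expected control-plane endpoint and removes any file that lacks it (or does not exist at all), so the kubeadm init that follows regenerates them from scratch. In Go terms it amounts to roughly this sketch (paths and endpoint taken from the log; the rest is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so the next
			// `kubeadm init` writes a fresh one.
			fmt.Println("removing stale config:", conf)
			os.Remove(conf)
		}
	}
}
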
	I1216 04:05:58.172134 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:05:58.209714 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:05:58.209936 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:05:58.280761 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:05:58.280869 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:05:58.280943 2073073 kubeadm.go:319] OS: Linux
	I1216 04:05:58.281014 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:05:58.281081 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:05:58.281135 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:05:58.281192 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:05:58.281251 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:05:58.281316 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:05:58.281370 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:05:58.281425 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:05:58.281480 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:05:58.347935 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:05:58.348070 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:05:58.348235 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:05:58.355578 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:05:58.361023 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:05:58.361193 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:05:58.361322 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:05:58.361438 2073073 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:05:58.361549 2073073 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:05:58.361663 2073073 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:05:58.367384 2073073 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:05:58.367458 2073073 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:05:58.367521 2073073 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:05:58.367595 2073073 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:05:58.367668 2073073 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:05:58.367706 2073073 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:05:58.367762 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:05:58.550047 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:05:59.040542 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:05:59.832816 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:06:00.196554 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:06:00.344590 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:06:00.344735 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:06:00.344804 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:06:00.348130 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:06:00.348264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:06:00.348345 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:06:00.348416 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	W1216 04:05:56.439836 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:58.939363 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:59.706828 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:59.787641 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:59.787749 2078887 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
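
Every stderr block in this run suggests --validate=false as a workaround. For completeness, a sketch of that invocation; as the comments note, it only disables client-side schema validation and would not rescue this run, because the apply itself still needs a reachable apiserver:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl's own suggested workaround from the stderr above: skip
	// client-side schema validation so apply no longer fetches /openapi/v2.
	// The request to the apiserver still happens, so with 8443 refusing
	// connections the apply would fail all the same.
	out, err := exec.Command("kubectl", "apply", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
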
	I1216 04:06:00.368445 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:00.461740 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:00.461775 2078887 retry.go:31] will retry after 38.977225184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:00.939472 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:00.386806 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:06:00.386918 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:06:00.399084 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:06:00.400082 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:06:00.400137 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:06:00.555518 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:06:00.555632 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1216 04:06:03.439401 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:05.439853 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:05.921308 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:06.013608 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:06.013643 2078887 retry.go:31] will retry after 27.262873571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:07.440089 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:09.440233 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:11.939830 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:13.940070 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:16.440297 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:18.940013 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:21.439376 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:23.440046 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:25.440167 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:27.939439 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:29.939765 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:31.940029 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:33.277682 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:33.336094 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:33.336187 2078887 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1216 04:06:34.439449 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:36.440148 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:38.939781 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:39.439610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:39.498473 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:39.498576 2078887 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:06:39.501951 2078887 out.go:179] * Enabled addons: 
	I1216 04:06:39.505615 2078887 addons.go:530] duration metric: took 1m56.018282146s for enable addons: enabled=[]
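Every addon apply above fails for the same reason: kubectl cannot reach the apiserver at localhost:8443 to download the OpenAPI schema it uses for client-side validation. A minimal diagnostic sketch, assuming shell access inside the node (e.g. docker exec into the no-preload-255023 container); note that --validate=false only skips the schema download, so the apply still fails until the apiserver is actually reachable:

	# Confirm the apiserver is the root cause (expect "connection refused" while it is down):
	curl -ksS https://localhost:8443/healthz || true
	# Retry one addon manifest without client-side validation (server-side admission still applies):
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml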
	W1216 04:06:40.939880 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 84 further identical node_ready.go:55 "connection refused" retries elided, logged at ~2.5s intervals from 04:06:42 through 04:09:52 ...]
	W1216 04:09:55.440234 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
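The poll behind these warnings is a plain GET on the node's Ready condition. A hedged equivalent for checking by hand (server address and node name taken from the log; assumes a kubeconfig pointing at 192.168.85.2:8443):

	# Prints "True" once the node reports Ready; errors out while the apiserver is unreachable:
	kubectl get node no-preload-255023 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'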
	I1216 04:10:00.553098 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000446161s
	I1216 04:10:00.553139 2073073 kubeadm.go:319] 
	I1216 04:10:00.553240 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:10:00.553447 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:10:00.553632 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:10:00.553642 2073073 kubeadm.go:319] 
	I1216 04:10:00.554240 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:10:00.554310 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:10:00.554364 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:10:00.554369 2073073 kubeadm.go:319] 
	I1216 04:10:00.559672 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:10:00.560440 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:10:00.560638 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:10:00.560897 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:10:00.560908 2073073 kubeadm.go:319] 
	I1216 04:10:00.561045 2073073 kubeadm.go:403] duration metric: took 8m7.05045578s to StartCluster
	I1216 04:10:00.561088 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:10:00.561095 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
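kubeadm's own hint above is the right first step; spelled out as a shell sketch (assumes systemd inside the node, which the logged 'systemctl' suggestions imply):

	sudo systemctl status kubelet           # unit state; the preflight warning notes the service is not enabled
	sudo journalctl -xeu kubelet -n 200     # recent kubelet log, which usually names the fatal error
	curl -s http://127.0.0.1:10248/healthz  # the endpoint the 4m0s kubelet-check polls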
	I1216 04:10:00.561160 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:10:00.591750 2073073 cri.go:89] found id: ""
	I1216 04:10:00.591842 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.591857 2073073 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:10:00.591866 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:10:00.591936 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:10:00.621403 2073073 cri.go:89] found id: ""
	I1216 04:10:00.621441 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.621454 2073073 logs.go:284] No container was found matching "etcd"
	I1216 04:10:00.621463 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:10:00.621538 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:10:00.650404 2073073 cri.go:89] found id: ""
	I1216 04:10:00.650434 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.650444 2073073 logs.go:284] No container was found matching "coredns"
	I1216 04:10:00.650451 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:10:00.650524 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:10:00.680445 2073073 cri.go:89] found id: ""
	I1216 04:10:00.680521 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.680536 2073073 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:10:00.680543 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:10:00.680611 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:10:00.710361 2073073 cri.go:89] found id: ""
	I1216 04:10:00.710396 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.710406 2073073 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:10:00.710412 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:10:00.710473 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:10:00.748242 2073073 cri.go:89] found id: ""
	I1216 04:10:00.748318 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.748352 2073073 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:10:00.748389 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:10:00.748488 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:10:00.780257 2073073 cri.go:89] found id: ""
	I1216 04:10:00.780341 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.780365 2073073 logs.go:284] No container was found matching "kindnet"
	I1216 04:10:00.780402 2073073 logs.go:123] Gathering logs for kubelet ...
	I1216 04:10:00.780432 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:10:00.837018 2073073 logs.go:123] Gathering logs for dmesg ...
	I1216 04:10:00.837057 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:10:00.854084 2073073 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:10:00.854114 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:10:00.921357 2073073 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:10:00.911353    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.912410    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.913336    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915304    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915802    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:10:00.911353    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.912410    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.913336    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915304    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915802    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:10:00.921436 2073073 logs.go:123] Gathering logs for containerd ...
	I1216 04:10:00.921463 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:10:00.962148 2073073 logs.go:123] Gathering logs for container status ...
	I1216 04:10:00.962187 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 04:10:00.992078 2073073 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000446161s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:10:00.992139 2073073 out.go:285] * 
	W1216 04:10:00.992191 2073073 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the "Error starting cluster" dump immediately above; duplicate elided]
	
	W1216 04:10:00.992219 2073073 out.go:285] * 
	W1216 04:10:00.994876 2073073 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:10:01.000867 2073073 out.go:203] 
	W1216 04:10:01.005125 2073073 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the output shown above ...]
	
	W1216 04:10:01.005428 2073073 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:10:01.005513 2073073 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:10:01.011350 2073073 out.go:203] 
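A minimal sketch of the retry that the suggestion above describes. The profile name is inferred from the Audit table later in this log, and the original start flags are assumptions since they are not shown for this process:

	$ journalctl -xeu kubelet    # inspect why the kubelet never becomes healthy on :10248
	$ minikube start -p newest-cni-450938 --driver=docker --container-runtime=containerd \
	    --extra-config=kubelet.cgroup-driver=systemd    # workaround suggested by the log above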
	W1216 04:09:57.939424 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:00.445310 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:02.940226 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:05.440272 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:07.939309 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:09.939394 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:12.439365 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:14.440073 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:16.939954 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:19.439321 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:21.440142 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:23.440223 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:25.939694 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:27.940085 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:30.440080 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:32.939316 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:34.939494 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:36.940361 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:39.439999 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:41.939464 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:10:44.438951 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1216 04:10:44.438989 2078887 node_ready.go:38] duration metric: took 6m0.000194453s for node "no-preload-255023" to be "Ready" ...
	I1216 04:10:44.442224 2078887 out.go:203] 
	W1216 04:10:44.445097 2078887 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:10:44.445127 2078887 out.go:285] * 
	W1216 04:10:44.447308 2078887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:10:44.450299 2078887 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207708993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207730006Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207767502Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207785201Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207802406Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207818545Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207829860Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207852456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207871098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207912377Z" level=info msg="Connect containerd service"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.208248542Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.209178851Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.222779090Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223109158Z" level=info msg="Start recovering state"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223239174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223305749Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252281513Z" level=info msg="Start event monitor"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252522812Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252604139Z" level=info msg="Start streaming server"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252676581Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252754257Z" level=info msg="runtime interface starting up..."
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252816426Z" level=info msg="starting plugins..."
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.253604035Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.254042786Z" level=info msg="containerd successfully booted in 0.075355s"
	Dec 16 04:04:42 no-preload-255023 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:10:45.797127    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:45.797698    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:45.799445    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:45.800108    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:45.801734    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:10:45 up  9:53,  0 user,  load average: 0.63, 0.72, 1.38
	Linux no-preload-255023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:10:42 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:10:43 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 16 04:10:43 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:43 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:43 no-preload-255023 kubelet[3771]: E1216 04:10:43.229321    3771 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:10:43 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:10:43 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:10:43 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 16 04:10:43 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:43 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:43 no-preload-255023 kubelet[3776]: E1216 04:10:43.984124    3776 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:10:43 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:10:43 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:10:44 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 16 04:10:44 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:44 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:44 no-preload-255023 kubelet[3782]: E1216 04:10:44.759193    3782 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:10:44 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:10:44 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:10:45 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 16 04:10:45 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:45 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:10:45 no-preload-255023 kubelet[3809]: E1216 04:10:45.493607    3809 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:10:45 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:10:45 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 2 (350.640213ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.33s)
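The kubelet journal above gives the root cause for this group of failures: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host. A minimal sketch of the opt-in named by the SystemVerification warning follows; the camelCase YAML key is an assumption, as the warning only gives the option name 'FailCgroupV1':

	# /var/lib/kubelet/config.yaml (fragment), hypothetical opt-in for cgroup v1 hosts
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false    # explicitly permit the kubelet to run on cgroup v1, per the warning

Migrating the host to cgroup v2, as the warning recommends, remains the long-term fix; the field above only disables the hard failure.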

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (105.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1216 04:10:05.534784 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:10:33.237938 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.533368166s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
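Every validation error above is a refused connection to localhost:8443, so the apiserver is down; the `--validate=false` escape hatch that kubectl suggests only skips the openapi download and would not make the apply succeed. For reference, a sketch of that bypass on the command the addon runs, with flag placement illustrative:

	$ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	    -f /etc/kubernetes/addons/metrics-apiservice.yaml    # remaining -f arguments as in the log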
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-450938
helpers_test.go:244: (dbg) docker inspect newest-cni-450938:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	        "Created": "2025-12-16T04:01:45.321904496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2073503,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:01:45.386518816Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hosts",
	        "LogPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65-json.log",
	        "Name": "/newest-cni-450938",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450938:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-450938",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	                "LowerDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450938",
	                "Source": "/var/lib/docker/volumes/newest-cni-450938/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450938",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450938",
	                "name.minikube.sigs.k8s.io": "newest-cni-450938",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53b7af9d36189bc075fe07c3b0e2530c19a08a8195afda92335ea20af6a0ae37",
	            "SandboxKey": "/var/run/docker/netns/53b7af9d3618",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34659"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34660"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34663"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34661"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34662"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450938": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:67:74:12:6c:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "961937bd6f37532287f488797e74382e326ca0852d2ef3f8a1d23a546f1f7d1a",
	                    "EndpointID": "959007b5102d8f520c150f1b38dcce2db8d49e04ba955be8676da8afebfb51e3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450938",
	                        "e2dde4cac2e0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938: exit status 6 (339.7994ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:11:46.660513 2087605 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
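The status output above points at the repair itself: the profile is missing from the kubeconfig, and `minikube update-context` rewrites that entry. A minimal sketch, with the profile flag taken from this test:

	$ minikube update-context -p newest-cni-450938    # repoint kubeconfig at the profile's endpoint
	$ kubectl config current-context                  # confirm the context resolves again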
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-092028 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:57 UTC │
	│ start   │ -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:57 UTC │ 16 Dec 25 03:58 UTC │
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	│ stop    │ -p no-preload-255023 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ addons  │ enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ start   │ -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:04:36
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:04:36.142328 2078887 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:04:36.142562 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142588 2078887 out.go:374] Setting ErrFile to fd 2...
	I1216 04:04:36.142607 2078887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:04:36.142894 2078887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:04:36.143393 2078887 out.go:368] Setting JSON to false
	I1216 04:04:36.144368 2078887 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35221,"bootTime":1765822656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:04:36.144465 2078887 start.go:143] virtualization:  
	I1216 04:04:36.150070 2078887 out.go:179] * [no-preload-255023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:04:36.153020 2078887 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:04:36.153105 2078887 notify.go:221] Checking for updates...
	I1216 04:04:36.158759 2078887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:04:36.161685 2078887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:36.164397 2078887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:04:36.167148 2078887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:04:36.169926 2078887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:04:36.173114 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:36.173672 2078887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:04:36.208296 2078887 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:04:36.208429 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.272451 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.263127415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.272558 2078887 docker.go:319] overlay module found
	I1216 04:04:36.275603 2078887 out.go:179] * Using the docker driver based on existing profile
	I1216 04:04:36.278393 2078887 start.go:309] selected driver: docker
	I1216 04:04:36.278413 2078887 start.go:927] validating driver "docker" against &{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:36.278512 2078887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:04:36.279246 2078887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:04:36.337226 2078887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:04:36.327670673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:04:36.337567 2078887 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:04:36.337598 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:36.337648 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:36.337694 2078887 start.go:353] cluster config:
	{Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:36.342771 2078887 out.go:179] * Starting "no-preload-255023" primary control-plane node in "no-preload-255023" cluster
	I1216 04:04:36.345786 2078887 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:04:36.348879 2078887 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:04:36.351831 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:36.352008 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.352389 2078887 cache.go:107] acquiring lock: {Name:mk0450325aacc7460afde2487596c0895eb14316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352472 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 04:04:36.352485 2078887 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.485µs
	I1216 04:04:36.352508 2078887 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 04:04:36.352528 2078887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:04:36.352738 2078887 cache.go:107] acquiring lock: {Name:mkc870fc6c12b387ee25e1b9ca9a320632395941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352823 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1216 04:04:36.352838 2078887 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 104.014µs
	I1216 04:04:36.352845 2078887 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352864 2078887 cache.go:107] acquiring lock: {Name:mk6b703a23a3ab5a8bd9af36cf3a59f27d4e1f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352901 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1216 04:04:36.352910 2078887 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 48.311µs
	I1216 04:04:36.352917 2078887 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 04:04:36.352934 2078887 cache.go:107] acquiring lock: {Name:mk60dd72305503c0ea2e16b1d16ccd8081a54f90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.352967 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1216 04:04:36.352983 2078887 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 49.427µs
	I1216 04:04:36.352990 2078887 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353002 2078887 cache.go:107] acquiring lock: {Name:mk6fa36dfa510ec7b8233463c2d901c70484a816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353044 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1216 04:04:36.353053 2078887 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 53.111µs
	I1216 04:04:36.353060 2078887 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 04:04:36.353096 2078887 cache.go:107] acquiring lock: {Name:mk65b0b8ff216fe2e0c76a8328b4837c4b65b152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353150 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1216 04:04:36.353161 2078887 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 80.704µs
	I1216 04:04:36.353167 2078887 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1216 04:04:36.353185 2078887 cache.go:107] acquiring lock: {Name:mk91af5531a8fba3ae1331bf11e776d4365c8b42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353224 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1216 04:04:36.353234 2078887 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 51.182µs
	I1216 04:04:36.353241 2078887 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1216 04:04:36.353254 2078887 cache.go:107] acquiring lock: {Name:mke4e5785550dce8ce0ae772cb7060b431e39dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.353286 2078887 cache.go:115] /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1216 04:04:36.353295 2078887 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.321µs
	I1216 04:04:36.353301 2078887 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1216 04:04:36.353308 2078887 cache.go:87] Successfully saved all images to host disk.
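	Note: because this profile runs with --preload=false, each image above is cached as an individual tarball rather than restored from a preload bundle. Listing what the cache checks just verified (an illustrative command; the path is taken from the log lines above):
	  ls /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/images/arm64/registry.k8s.io/
	  # expect entries such as kube-apiserver_v1.35.0-beta.0, etcd_3.6.5-0, pause_3.10.1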
	I1216 04:04:36.371527 2078887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:04:36.371552 2078887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:04:36.371574 2078887 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:04:36.371606 2078887 start.go:360] acquireMachinesLock for no-preload-255023: {Name:mkc3fbe159f35ba61346866b1384afc1dc23074c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:04:36.371679 2078887 start.go:364] duration metric: took 52.75µs to acquireMachinesLock for "no-preload-255023"
	I1216 04:04:36.371703 2078887 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:04:36.371713 2078887 fix.go:54] fixHost starting: 
	I1216 04:04:36.371983 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.389190 2078887 fix.go:112] recreateIfNeeded on no-preload-255023: state=Stopped err=<nil>
	W1216 04:04:36.389224 2078887 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:04:36.392475 2078887 out.go:252] * Restarting existing docker container for "no-preload-255023" ...
	I1216 04:04:36.392575 2078887 cli_runner.go:164] Run: docker start no-preload-255023
	I1216 04:04:36.686743 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:36.709621 2078887 kic.go:430] container "no-preload-255023" state is running.
	I1216 04:04:36.710033 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:36.742909 2078887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/config.json ...
	I1216 04:04:36.743215 2078887 machine.go:94] provisionDockerMachine start ...
	I1216 04:04:36.743307 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:36.771624 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:36.772082 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:36.772113 2078887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:04:36.772685 2078887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51402->127.0.0.1:34664: read: connection reset by peer
	I1216 04:04:39.911257 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 04:04:39.911284 2078887 ubuntu.go:182] provisioning hostname "no-preload-255023"
	I1216 04:04:39.911351 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:39.929644 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:39.929951 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:39.929968 2078887 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-255023 && echo "no-preload-255023" | sudo tee /etc/hostname
	I1216 04:04:40.091006 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-255023
	
	I1216 04:04:40.091150 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.111323 2078887 main.go:143] libmachine: Using SSH client type: native
	I1216 04:04:40.111660 2078887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34664 <nil> <nil>}
	I1216 04:04:40.111686 2078887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-255023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-255023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-255023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:04:40.255648 2078887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
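	Note: the script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1, rewriting the entry in place if one exists and appending it otherwise. Confirming it from the host (illustrative, not part of the run):
	  minikube ssh -p no-preload-255023 -- grep 127.0.1.1 /etc/hosts
	  # expect: 127.0.1.1 no-preload-255023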
	I1216 04:04:40.255679 2078887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:04:40.255708 2078887 ubuntu.go:190] setting up certificates
	I1216 04:04:40.255719 2078887 provision.go:84] configureAuth start
	I1216 04:04:40.255800 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.274459 2078887 provision.go:143] copyHostCerts
	I1216 04:04:40.274544 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:04:40.274559 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:04:40.274643 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:04:40.274749 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:04:40.274761 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:04:40.274788 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:04:40.274850 2078887 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:04:40.274858 2078887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:04:40.274882 2078887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:04:40.274932 2078887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.no-preload-255023 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-255023]
	I1216 04:04:40.540362 2078887 provision.go:177] copyRemoteCerts
	I1216 04:04:40.540434 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:04:40.540481 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.560258 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.658891 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:04:40.677291 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:04:40.696276 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 04:04:40.714152 2078887 provision.go:87] duration metric: took 458.418313ms to configureAuth
	I1216 04:04:40.714179 2078887 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:04:40.714393 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:40.714406 2078887 machine.go:97] duration metric: took 3.971173434s to provisionDockerMachine
	I1216 04:04:40.714414 2078887 start.go:293] postStartSetup for "no-preload-255023" (driver="docker")
	I1216 04:04:40.714431 2078887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:04:40.714490 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:04:40.714532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.731640 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.827149 2078887 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:04:40.830526 2078887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:04:40.830554 2078887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:04:40.830567 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:04:40.830622 2078887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:04:40.830706 2078887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:04:40.830809 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:04:40.838400 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:40.856071 2078887 start.go:296] duration metric: took 141.636209ms for postStartSetup
	I1216 04:04:40.856173 2078887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:04:40.856212 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.873995 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:40.968232 2078887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:04:40.973380 2078887 fix.go:56] duration metric: took 4.601659976s for fixHost
	I1216 04:04:40.973407 2078887 start.go:83] releasing machines lock for "no-preload-255023", held for 4.601715131s
	I1216 04:04:40.973483 2078887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-255023
	I1216 04:04:40.991467 2078887 ssh_runner.go:195] Run: cat /version.json
	I1216 04:04:40.991532 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:40.991607 2078887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:04:40.991672 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:41.016410 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.023238 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:41.219027 2078887 ssh_runner.go:195] Run: systemctl --version
	I1216 04:04:41.225735 2078887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:04:41.231530 2078887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:04:41.231614 2078887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:04:41.245369 2078887 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:04:41.245409 2078887 start.go:496] detecting cgroup driver to use...
	I1216 04:04:41.245441 2078887 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:04:41.245491 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:04:41.264763 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:04:41.278940 2078887 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:04:41.279078 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:04:41.295177 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:04:41.308854 2078887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:04:41.425808 2078887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:04:41.539126 2078887 docker.go:234] disabling docker service ...
	I1216 04:04:41.539232 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:04:41.555103 2078887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:04:41.569579 2078887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:04:41.697114 2078887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:04:41.825875 2078887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:04:41.840190 2078887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:04:41.856382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:04:41.866837 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:04:41.876037 2078887 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:04:41.876170 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:04:41.885348 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.894763 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:04:41.904382 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:04:41.913120 2078887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:04:41.922033 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:04:41.931520 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:04:41.940760 2078887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:04:41.953916 2078887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:04:41.967109 2078887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:04:41.975264 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.127685 2078887 ssh_runner.go:195] Run: sudo systemctl restart containerd
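	Note: the run of sed edits above rewrites /etc/containerd/config.toml to match this host: cgroupfs instead of systemd cgroups, the runc v2 shim, the registry.k8s.io/pause:3.10.1 sandbox image, and unprivileged ports enabled; the daemon-reload/restart pair then applies it. A quick post-restart check (a sketch assuming the default config path used above):
	  grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	  grep -n 'sandbox_image' /etc/containerd/config.toml   # expect: registry.k8s.io/pause:3.10.1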
	I1216 04:04:42.257333 2078887 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:04:42.257501 2078887 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:04:42.262740 2078887 start.go:564] Will wait 60s for crictl version
	I1216 04:04:42.262889 2078887 ssh_runner.go:195] Run: which crictl
	I1216 04:04:42.267776 2078887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:04:42.299498 2078887 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:04:42.299668 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.325553 2078887 ssh_runner.go:195] Run: containerd --version
	I1216 04:04:42.351925 2078887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:04:42.355177 2078887 cli_runner.go:164] Run: docker network inspect no-preload-255023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:04:42.376901 2078887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1216 04:04:42.381129 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
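	Note: this one-liner is minikube's idempotent hosts-entry update: drop any stale line for the name, append the fresh mapping, and copy the temp file back under sudo (a plain redirection would fail, since the shell opens /etc/hosts before privileges are elevated). The same pattern generalized (NAME and IP are placeholders):
	  NAME=host.minikube.internal; IP=192.168.85.1
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$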
	I1216 04:04:42.391782 2078887 kubeadm.go:884] updating cluster {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:04:42.391898 2078887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:04:42.391946 2078887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:04:42.421381 2078887 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:04:42.421429 2078887 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:04:42.421437 2078887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:04:42.421531 2078887 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-255023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
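	Note: the empty ExecStart= line in the drop-in above is the standard systemd override idiom: it clears the ExecStart inherited from the base kubelet unit so the next line fully replaces it rather than adding a second command. The merged unit can be inspected on the node (illustrative):
	  minikube ssh -p no-preload-255023 -- systemctl cat kubelet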
	I1216 04:04:42.421601 2078887 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:04:42.451031 2078887 cni.go:84] Creating CNI manager for ""
	I1216 04:04:42.451088 2078887 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:04:42.451111 2078887 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:04:42.451134 2078887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-255023 NodeName:no-preload-255023 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:04:42.451548 2078887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-255023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
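	Note: the generated config can be sanity-checked before kubeadm consumes it; recent kubeadm releases ship a validate subcommand (a sketch using the binary path and file name from the surrounding log; availability depends on the kubeadm version):
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new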
	
	I1216 04:04:42.451660 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:04:42.462557 2078887 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:04:42.462665 2078887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:04:42.470706 2078887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:04:42.484036 2078887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:04:42.496679 2078887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1216 04:04:42.510060 2078887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:04:42.514034 2078887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:04:42.523944 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:42.642280 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:42.658128 2078887 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023 for IP: 192.168.85.2
	I1216 04:04:42.658161 2078887 certs.go:195] generating shared ca certs ...
	I1216 04:04:42.658178 2078887 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:42.658357 2078887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:04:42.658425 2078887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:04:42.658440 2078887 certs.go:257] generating profile certs ...
	I1216 04:04:42.658560 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.key
	I1216 04:04:42.658648 2078887 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key.f898ebc5
	I1216 04:04:42.658713 2078887 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key
	I1216 04:04:42.658847 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:04:42.658904 2078887 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:04:42.658920 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:04:42.658963 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:04:42.659011 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:04:42.659085 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:04:42.659170 2078887 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:04:42.659889 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:04:42.682344 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:04:42.731773 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:04:42.759464 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:04:42.781713 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:04:42.800339 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:04:42.819107 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:04:42.837811 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 04:04:42.856139 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:04:42.873711 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:04:42.892395 2078887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:04:42.910549 2078887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:04:42.924760 2078887 ssh_runner.go:195] Run: openssl version
	I1216 04:04:42.931736 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.940294 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:04:42.948204 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952285 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.952396 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:04:42.993553 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:04:43.001452 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.010861 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:04:43.019267 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023881 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.023989 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:04:43.065733 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:04:43.074014 2078887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.082044 2078887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:04:43.090335 2078887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094833 2078887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.094908 2078887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:04:43.137155 2078887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
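	Note: the symlink names tested above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: each CA cert in /etc/ssl/certs is reachable via a link named <subject-hash>.0, which is how OpenSSL looks up trust anchors. Reproducing one by hand (illustrative):
	  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"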
	I1216 04:04:43.145351 2078887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:04:43.149907 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:04:43.192388 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:04:43.235812 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:04:43.277441 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:04:43.318805 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:04:43.360025 2078887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
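	Note: each openssl invocation above uses -checkend 86400, which exits 0 if the certificate is still valid 24 hours (86400 s) from now and non-zero otherwise, so these checks confirm the existing certs are good for at least another day before the restart proceeds. Checking a single cert by hand (illustrative):
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo 'valid for >=24h' || echo 'expires within 24h'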
	I1216 04:04:43.402731 2078887 kubeadm.go:401] StartCluster: {Name:no-preload-255023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-255023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:04:43.402829 2078887 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:04:43.402928 2078887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:04:43.429949 2078887 cri.go:89] found id: ""
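The `crictl ps -a --quiet --label ...` call above returns one container ID per line, and the empty `found id: ""` result means no kube-system containers are running yet under containerd. A small sketch of the same query, assuming sudo and crictl are available on the node:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists the IDs of all containers (running or not)
// labelled with the kube-system pod namespace, as in the log above.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints bare IDs, one per line; Fields drops blank output.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids)) // 0 here
}
```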
	I1216 04:04:43.430063 2078887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:04:43.452392 2078887 kubeadm.go:417] found existing configuration files, will attempt cluster restart
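The `sudo ls` probe above is what drives the restart-vs-init decision: if the kubeadm flags file, the kubelet config, and the etcd data directory are all present, minikube attempts a cluster restart rather than a fresh `kubeadm init`. A sketch of that decision with the same three paths (illustrative, not minikube's kubeadm.go):

```go
package main

import (
	"fmt"
	"os"
)

// hasExistingCluster mirrors the `sudo ls` check in the log: all three
// artifacts of a previous kubeadm run must exist for a restart attempt.
func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false // any missing file forces a clean init
		}
	}
	return true
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("will run kubeadm init")
	}
}
```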
	I1216 04:04:43.452428 2078887 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:04:43.452517 2078887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:04:43.466566 2078887 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:04:43.467070 2078887 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-255023" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.467226 2078887 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-255023" cluster setting kubeconfig missing "no-preload-255023" context setting]
	I1216 04:04:43.467608 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
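The kubeconfig repair above follows from loading the file and finding neither a cluster nor a context entry for the profile. A hedged sketch using client-go's clientcmd package; the two map lookups correspond to the two "missing ... setting" messages (this is illustrative, not minikube's kubeconfig.go):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/22158-1796512/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	// The "does not appear in" message corresponds to a missing entry in
	// the Clusters map; the context check is analogous.
	if _, ok := cfg.Clusters["no-preload-255023"]; !ok {
		fmt.Println(`kubeconfig missing "no-preload-255023" cluster setting (will repair)`)
	}
	if _, ok := cfg.Contexts["no-preload-255023"]; !ok {
		fmt.Println(`kubeconfig missing "no-preload-255023" context setting (will repair)`)
	}
}
```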
	I1216 04:04:43.469283 2078887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:04:43.485730 2078887 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1216 04:04:43.485775 2078887 kubeadm.go:602] duration metric: took 33.340688ms to restartPrimaryControlPlane
	I1216 04:04:43.485805 2078887 kubeadm.go:403] duration metric: took 83.08421ms to StartCluster
	I1216 04:04:43.485836 2078887 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.485913 2078887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:04:43.486639 2078887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:04:43.486917 2078887 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:04:43.487330 2078887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:04:43.487405 2078887 addons.go:70] Setting storage-provisioner=true in profile "no-preload-255023"
	I1216 04:04:43.487422 2078887 addons.go:239] Setting addon storage-provisioner=true in "no-preload-255023"
	I1216 04:04:43.487445 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.488102 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.488423 2078887 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:04:43.488504 2078887 addons.go:70] Setting dashboard=true in profile "no-preload-255023"
	I1216 04:04:43.488521 2078887 addons.go:239] Setting addon dashboard=true in "no-preload-255023"
	W1216 04:04:43.488541 2078887 addons.go:248] addon dashboard should already be in state true
	I1216 04:04:43.488579 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.489074 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.491900 2078887 addons.go:70] Setting default-storageclass=true in profile "no-preload-255023"
	I1216 04:04:43.491932 2078887 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-255023"
	I1216 04:04:43.492873 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.492947 2078887 out.go:179] * Verifying Kubernetes components...
	I1216 04:04:43.501909 2078887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:04:43.540041 2078887 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:04:43.544945 2078887 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:04:43.547811 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:04:43.547843 2078887 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:04:43.547914 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.559171 2078887 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:04:43.559336 2078887 addons.go:239] Setting addon default-storageclass=true in "no-preload-255023"
	I1216 04:04:43.559370 2078887 host.go:66] Checking if "no-preload-255023" exists ...
	I1216 04:04:43.559803 2078887 cli_runner.go:164] Run: docker container inspect no-preload-255023 --format={{.State.Status}}
	I1216 04:04:43.563234 2078887 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.563261 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:04:43.563329 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.613200 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.627516 2078887 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.627538 2078887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:04:43.627600 2078887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-255023
	I1216 04:04:43.647225 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
	I1216 04:04:43.663344 2078887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34664 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/no-preload-255023/id_rsa Username:docker}
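The three `docker container inspect -f` calls above resolve the host port Docker published for the container's SSH port 22 (34664 here), which the sshutil clients then dial on 127.0.0.1. A sketch of that lookup; the Go template is the one from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort reads the host port Docker mapped to container port 22/tcp
// by indexing NetworkSettings.Ports["22/tcp"][0].HostPort.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-255023")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port) // e.g. 34664 in the log above
}
```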
	I1216 04:04:43.730458 2078887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:04:43.761779 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:04:43.761800 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:04:43.776412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:04:43.776431 2078887 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:04:43.790891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:04:43.790913 2078887 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:04:43.792062 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:43.811412 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:04:43.811477 2078887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:04:43.827119 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:04:43.827185 2078887 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:04:43.838623 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:04:43.851891 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:04:43.851965 2078887 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:04:43.868373 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:04:43.868445 2078887 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:04:43.883425 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:04:43.883498 2078887 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:04:43.898225 2078887 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:43.898297 2078887 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:04:43.913600 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:44.438735 2078887 node_ready.go:35] waiting up to 6m0s for node "no-preload-255023" to be "Ready" ...
	W1216 04:04:44.439130 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439312 2078887 retry.go:31] will retry after 305.613762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
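From here the log settles into a retry loop: each failed `kubectl apply` is re-run after a short randomized delay (305ms, 363ms, 272ms, ...) until the apiserver starts answering on port 8443. A minimal sketch of such a loop; the jitter model here is an assumption for illustration, not retry.go's actual backoff:

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` after a short
// jittered delay, matching the "apply failed, will retry after ..." pattern
// in the log, until it succeeds or the attempt budget is exhausted.
func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		// Randomized delay in the few-hundred-millisecond range, like the
		// "will retry after ..." intervals above.
		delay := time.Duration(200+rand.Intn(600)) * time.Millisecond
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 10); err != nil {
		fmt.Println("giving up:", err)
	}
}
```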
	W1216 04:04:44.439316 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439337 2078887 retry.go:31] will retry after 363.187652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.439533 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.439559 2078887 retry.go:31] will retry after 272.903595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.713163 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:44.745739 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:44.781147 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.781180 2078887 retry.go:31] will retry after 329.721194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.803439 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:44.821890 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.821920 2078887 retry.go:31] will retry after 342.537223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:44.869557 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:44.869592 2078887 retry.go:31] will retry after 400.087881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.112248 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:04:45.165426 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.247199 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.247273 2078887 retry.go:31] will retry after 632.091254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.270745 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.301341 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.301432 2078887 retry.go:31] will retry after 431.279641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:45.357125 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.357163 2078887 retry.go:31] will retry after 448.988888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.733393 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:45.794896 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.794941 2078887 retry.go:31] will retry after 735.19991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.807205 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:45.867083 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.867117 2078887 retry.go:31] will retry after 568.360561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.880293 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:45.942564 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:45.942654 2078887 retry.go:31] will retry after 591.592305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.436264 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:46.439868 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
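The node_ready.go warning shows the readiness poll running in parallel with the addon retries: minikube repeatedly fetches the node object and inspects its Ready condition, retrying while the apiserver still refuses connections. A hedged client-go sketch of that poll, with the names and paths taken from the log above:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22158-1796512/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-255023", metav1.GetOptions{})
		if err != nil {
			// Mirrors the "connection refused ... (will retry)" warning:
			// the apiserver is not accepting connections yet.
			time.Sleep(3 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}
```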
	W1216 04:04:46.515391 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.515423 2078887 retry.go:31] will retry after 863.502918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.530605 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:04:46.535089 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:46.607927 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.607962 2078887 retry.go:31] will retry after 1.115944939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:46.613433 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:46.613467 2078887 retry.go:31] will retry after 961.68966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.379736 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:47.458969 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.459090 2078887 retry.go:31] will retry after 1.606575866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.575407 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:47.642476 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.642514 2078887 retry.go:31] will retry after 2.560273252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.724901 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:47.785232 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:47.785270 2078887 retry.go:31] will retry after 2.616642999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:48.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:49.066818 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:49.131769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:49.131806 2078887 retry.go:31] will retry after 3.366815571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.203910 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:50.281554 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.281591 2078887 retry.go:31] will retry after 3.322699521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.403034 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:50.475418 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:50.475451 2078887 retry.go:31] will retry after 3.920781833s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:50.940166 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:52.499306 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:52.566228 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:52.566262 2078887 retry.go:31] will retry after 2.315880156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:53.440268 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:53.604610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:53.664371 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:53.664411 2078887 retry.go:31] will retry after 4.867931094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.396477 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:04:54.458906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.458940 2078887 retry.go:31] will retry after 6.25682185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.882414 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:04:54.945906 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:54.945940 2078887 retry.go:31] will retry after 8.419891658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:04:55.939826 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:04:58.439821 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:04:58.533209 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:04:58.597277 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:04:58.597315 2078887 retry.go:31] will retry after 8.821330278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:00.440193 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:00.716680 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:00.792490 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:00.792525 2078887 retry.go:31] will retry after 4.988340186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:02.939239 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:03.366954 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:03.427635 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:03.427664 2078887 retry.go:31] will retry after 11.977275357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:04.939595 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:05.781026 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:05.843492 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:05.843528 2078887 retry.go:31] will retry after 12.145550583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:06.939757 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:07.419555 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:07.505584 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:07.505621 2078887 retry.go:31] will retry after 12.780052365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:08.940202 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:11.440118 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:13.940295 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:15.405274 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:15.464480 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:15.464513 2078887 retry.go:31] will retry after 7.284769957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:16.439936 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:17.989703 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:18.058004 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:18.058043 2078887 retry.go:31] will retry after 16.677849322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:18.440048 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:20.286526 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:20.345776 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:20.345812 2078887 retry.go:31] will retry after 16.385541559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:20.939362 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:22.749528 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:22.811867 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:22.811905 2078887 retry.go:31] will retry after 14.258552084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:22.939418 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:25.439972 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:27.940078 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:30.440042 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:32.939848 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:34.736331 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:05:34.794887 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:34.794921 2078887 retry.go:31] will retry after 31.126157271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
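The retry.go entries re-run each failed apply after a randomized delay (11.9s, 12.1s, 12.7s, ... 31.1s here), so the three addon manifests keep cycling while the apiserver stays down. A condensed bash sketch of that pattern, with placeholder attempt counts and delay bounds rather than minikube's actual backoff values:

    # Re-run a command with a randomized delay between attempts.
    # Attempt count and delay range are illustrative placeholders.
    retry_apply() {
      for attempt in 1 2 3 4 5; do
        "$@" && return 0
        sleep $((10 + RANDOM % 25))   # jittered wait, like the uneven delays in the log
      done
      return 1
    }
    retry_apply sudo kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml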
	W1216 04:05:35.439532 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:36.732300 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:05:36.795769 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:36.795804 2078887 retry.go:31] will retry after 23.567098644s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.070890 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:37.130033 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:05:37.130066 2078887 retry.go:31] will retry after 22.575569039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:37.439758 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:39.439923 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:41.939932 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:44.439453 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:46.440129 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:48.939948 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:51.440009 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:53.939968 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
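Interleaved with the addon retries, a separate loop (node_ready.go) polls the node's Ready condition roughly every 2.5s and is refused by the same dead apiserver at 192.168.85.2:8443. The equivalent manual probe, using the node name and address from the log (a sketch, not the harness code):

    # Poll the node's Ready condition until the apiserver answers.
    until kubectl --server=https://192.168.85.2:8443 get node no-preload-255023 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
        | grep -q True; do
      sleep 2.5
    done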
	I1216 04:05:57.654978 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000290251s
	I1216 04:05:57.655319 2073073 kubeadm.go:319] 
	I1216 04:05:57.655442 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:05:57.655501 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:05:57.655753 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:05:57.655760 2073073 kubeadm.go:319] 
	I1216 04:05:57.656095 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:05:57.656161 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:05:57.656330 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:05:57.656339 2073073 kubeadm.go:319] 
	I1216 04:05:57.661429 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:05:57.661908 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:05:57.662048 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:05:57.662311 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:05:57.662326 2073073 kubeadm.go:319] 
	I1216 04:05:57.662412 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 04:05:57.662579 2073073 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450938] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000290251s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
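This block is the root failure of the newest-cni run: kubeadm polled the kubelet's health endpoint for the full 4m0s and gave up, so no control plane ever came up on this node. The troubleshooting commands kubeadm suggests, plus the probe it polls, can be run directly on the node (all three appear verbatim in the output above):

    # Inspect why the kubelet never became healthy.
    systemctl status kubelet
    journalctl -xeu kubelet
    # The readiness probe kubeadm polls for up to 4m0s:
    curl -sSL http://127.0.0.1:10248/healthz

The SystemVerification warning above is also worth noting: this host runs cgroups v1, which kubelet v1.35+ rejects unless the kubelet configuration sets FailCgroupV1 to false, and that is a plausible reason the kubelet never answered the health check.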
	
	I1216 04:05:57.662661 2073073 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1216 04:05:58.084120 2073073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:05:58.098877 2073073 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:05:58.098960 2073073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:05:58.107810 2073073 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:05:58.107839 2073073 kubeadm.go:158] found existing configuration files:
	
	I1216 04:05:58.107907 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:05:58.116252 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:05:58.116319 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:05:58.123966 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:05:58.131928 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:05:58.131999 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:05:58.139938 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.148354 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:05:58.148421 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:05:58.155951 2073073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:05:58.163949 2073073 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:05:58.164019 2073073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
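Before retrying kubeadm init, minikube sweeps /etc/kubernetes for stale kubeconfigs: each file is grepped for the expected control-plane endpoint and removed if the endpoint is absent (here every grep exits with status 2 because the files are already gone, so each rm is a no-op). The four grep/rm pairs above condense to:

    # Drop kubeconfigs that do not point at the expected control-plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done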
	I1216 04:05:58.172134 2073073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:05:58.209714 2073073 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:05:58.209936 2073073 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:05:58.280761 2073073 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:05:58.280869 2073073 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:05:58.280943 2073073 kubeadm.go:319] OS: Linux
	I1216 04:05:58.281014 2073073 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:05:58.281081 2073073 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:05:58.281135 2073073 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:05:58.281192 2073073 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:05:58.281251 2073073 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:05:58.281316 2073073 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:05:58.281370 2073073 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:05:58.281425 2073073 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:05:58.281480 2073073 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:05:58.347935 2073073 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:05:58.348070 2073073 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:05:58.348235 2073073 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:05:58.355578 2073073 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:05:58.361023 2073073 out.go:252]   - Generating certificates and keys ...
	I1216 04:05:58.361193 2073073 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:05:58.361322 2073073 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:05:58.361438 2073073 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:05:58.361549 2073073 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:05:58.361663 2073073 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:05:58.367384 2073073 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:05:58.367458 2073073 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:05:58.367521 2073073 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:05:58.367595 2073073 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:05:58.367668 2073073 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:05:58.367706 2073073 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:05:58.367762 2073073 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:05:58.550047 2073073 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:05:59.040542 2073073 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:05:59.832816 2073073 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:06:00.196554 2073073 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:06:00.344590 2073073 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:06:00.344735 2073073 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:06:00.344804 2073073 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:06:00.348130 2073073 out.go:252]   - Booting up control plane ...
	I1216 04:06:00.348264 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:06:00.348345 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:06:00.348416 2073073 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	W1216 04:05:56.439836 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:05:58.939363 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:05:59.706828 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:05:59.787641 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:05:59.787749 2078887 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
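With that, the retry budget for the default-storageclass addon is exhausted and the enable callback surfaces the error instead of scheduling another attempt (the dashboard addon is still retrying below). Once an apiserver is actually serving, the same manifest applies cleanly; a sketch that gates the re-apply on the apiserver's readiness endpoint (/readyz is the stock apiserver readiness probe, not something from this log):

    # Wait until the apiserver reports ready, then re-apply the addon manifest.
    until kubectl get --raw='/readyz' >/dev/null 2>&1; do sleep 2; done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml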
	I1216 04:06:00.368445 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:00.461740 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:00.461775 2078887 retry.go:31] will retry after 38.977225184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:00.939472 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:00.386806 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:06:00.386918 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:06:00.399084 2073073 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:06:00.400082 2073073 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:06:00.400137 2073073 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:06:00.555518 2073073 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:06:00.555632 2073073 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
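The kubelet-check phase above polls the kubelet's local healthz endpoint for up to four minutes before giving up. The same probe by hand looks like this (a sketch; the exact curl is the one kubeadm itself reports when the deadline expires later in this log):

    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
    systemctl is-active kubelet                # assumed quick check of the unit state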
	W1216 04:06:03.439401 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:05.439853 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:05.921308 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:06.013608 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:06:06.013643 2078887 retry.go:31] will retry after 27.262873571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:07.440089 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 10 more identical node_ready.go:55 "connection refused" retries, logged every 2-2.5s, from 04:06:09 through 04:06:29 ...]
	W1216 04:06:31.940029 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:33.277682 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:06:33.336094 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:33.336187 2078887 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1216 04:06:34.439449 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:36.440148 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	W1216 04:06:38.939781 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:06:39.439610 2078887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:06:39.498473 2078887 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:06:39.498576 2078887 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:06:39.501951 2078887 out.go:179] * Enabled addons: 
	I1216 04:06:39.505615 2078887 addons.go:530] duration metric: took 1m56.018282146s for enable addons: enabled=[]
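The empty enabled=[] list confirms that no addon callback succeeded. Once an apiserver is actually reachable, the same addons could be re-enabled from the host, e.g. (a sketch using this run's profile name):

    minikube -p no-preload-255023 addons enable default-storageclass
    minikube -p no-preload-255023 addons enable storage-provisioner
    minikube -p no-preload-255023 addons enable dashboard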
	W1216 04:06:40.939880 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 84 more identical node_ready.go:55 "connection refused" retries, logged every 2-2.5s, from 04:06:42 through 04:09:52 ...]
	W1216 04:09:55.440234 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	I1216 04:10:00.553098 2073073 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000446161s
	I1216 04:10:00.553139 2073073 kubeadm.go:319] 
	I1216 04:10:00.553240 2073073 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:10:00.553447 2073073 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:10:00.553632 2073073 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:10:00.553642 2073073 kubeadm.go:319] 
	I1216 04:10:00.554240 2073073 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:10:00.554310 2073073 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:10:00.554364 2073073 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:10:00.554369 2073073 kubeadm.go:319] 
	I1216 04:10:00.559672 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:10:00.560440 2073073 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:10:00.560638 2073073 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:10:00.560897 2073073 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:10:00.560908 2073073 kubeadm.go:319] 
	I1216 04:10:00.561045 2073073 kubeadm.go:403] duration metric: took 8m7.05045578s to StartCluster
	I1216 04:10:00.561088 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:10:00.561095 2073073 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
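Spelled out, the troubleshooting kubeadm suggests above would be run on the node as follows (a sketch; minikube gathers the bounded journalctl variant itself a few lines below):

    systemctl status kubelet              # unit state and most recent exit status
    journalctl -xeu kubelet               # full kubelet unit log with explanations
    sudo journalctl -u kubelet -n 400     # the bounded form minikube runs below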
	I1216 04:10:00.561160 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:10:00.591750 2073073 cri.go:89] found id: ""
	I1216 04:10:00.591842 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.591857 2073073 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:10:00.591866 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:10:00.591936 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:10:00.621403 2073073 cri.go:89] found id: ""
	I1216 04:10:00.621441 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.621454 2073073 logs.go:284] No container was found matching "etcd"
	I1216 04:10:00.621463 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:10:00.621538 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:10:00.650404 2073073 cri.go:89] found id: ""
	I1216 04:10:00.650434 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.650444 2073073 logs.go:284] No container was found matching "coredns"
	I1216 04:10:00.650451 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:10:00.650524 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:10:00.680445 2073073 cri.go:89] found id: ""
	I1216 04:10:00.680521 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.680536 2073073 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:10:00.680543 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:10:00.680611 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:10:00.710361 2073073 cri.go:89] found id: ""
	I1216 04:10:00.710396 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.710406 2073073 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:10:00.710412 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:10:00.710473 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:10:00.748242 2073073 cri.go:89] found id: ""
	I1216 04:10:00.748318 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.748352 2073073 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:10:00.748389 2073073 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:10:00.748488 2073073 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:10:00.780257 2073073 cri.go:89] found id: ""
	I1216 04:10:00.780341 2073073 logs.go:282] 0 containers: []
	W1216 04:10:00.780365 2073073 logs.go:284] No container was found matching "kindnet"
	I1216 04:10:00.780402 2073073 logs.go:123] Gathering logs for kubelet ...
	I1216 04:10:00.780432 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:10:00.837018 2073073 logs.go:123] Gathering logs for dmesg ...
	I1216 04:10:00.837057 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:10:00.854084 2073073 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:10:00.854114 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:10:00.921357 2073073 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:10:00.911353    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.912410    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.913336    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915304    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915802    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:10:00.911353    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.912410    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.913336    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915304    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:10:00.915802    4790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:10:00.921436 2073073 logs.go:123] Gathering logs for containerd ...
	I1216 04:10:00.921463 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:10:00.962148 2073073 logs.go:123] Gathering logs for container status ...
	I1216 04:10:00.962187 2073073 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
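The container-status command above is a shell fallback chain: resolve crictl on PATH if possible (substituting the bare name when `which` finds nothing), then fall back to docker only if the crictl invocation fails. A slightly simplified equivalent (falls back when crictl is absent rather than when it errors):

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a     # CRI runtimes (containerd in this job)
    else
      sudo docker ps -a     # fallback when crictl is not installed
    fi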
	W1216 04:10:00.992078 2073073 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000446161s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
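	Note: the SystemVerification warning above names the escape hatch for hosts that must stay on cgroup v1 with kubelet v1.35+: set the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skip the validation. A minimal sketch of one way to do that, using kubeadm's --patches directory and the "kubeletconfiguration" patch target this run already exercises (the patch directory and file name below are illustrative, not taken from this run):

	# Strategic-merge patch for the generated KubeletConfiguration (sketch).
	sudo mkdir -p /etc/kubeadm/patches
	sudo tee /etc/kubeadm/patches/kubeletconfiguration+strategic.yaml >/dev/null <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false   # opt back into (deprecated) cgroup v1 support
	EOF
	# Re-run init with the patch; SystemVerification is already on the
	# --ignore-preflight-errors list used by this run.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --patches /etc/kubeadm/patches \
	  --ignore-preflight-errors=SystemVerification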
	W1216 04:10:00.992139 2073073 out.go:285] * 
	W1216 04:10:00.992191 2073073 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output and warnings shown above]
	
	W1216 04:10:00.992219 2073073 out.go:285] * 
	W1216 04:10:00.994876 2073073 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:10:01.000867 2073073 out.go:203] 
	W1216 04:10:01.005125 2073073 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output and warnings shown above]
	
	W1216 04:10:01.005428 2073073 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:10:01.005513 2073073 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:10:01.011350 2073073 out.go:203] 
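	The 'Suggestion:' line above can be tried directly. A hedged example of the corresponding invocation, reusing the binary, driver, runtime, and Kubernetes version visible in this report (the profile name matches the logs that follow; any other flags from the original run are omitted):

	# From the suggestion above; only the --extra-config flag is new.
	out/minikube-linux-arm64 start -p newest-cni-450938 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd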
	W1216 04:09:57.939424 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-255023": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 19 further retries of the same connection-refused warning, logged every ~2.5s through 04:10:41, omitted ...]
	W1216 04:10:44.438951 2078887 node_ready.go:55] error getting node "no-preload-255023" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1216 04:10:44.438989 2078887 node_ready.go:38] duration metric: took 6m0.000194453s for node "no-preload-255023" to be "Ready" ...
	I1216 04:10:44.442224 2078887 out.go:203] 
	W1216 04:10:44.445097 2078887 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:10:44.445127 2078887 out.go:285] * 
	W1216 04:10:44.447308 2078887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:10:44.450299 2078887 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934013777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934085701Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934187689Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934261254Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934323858Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934386954Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934441985Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934504367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934572230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.934660499Z" level=info msg="Connect containerd service"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.935078270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.935783846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.946850869Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.947102868Z" level=info msg="Start recovering state"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.947203042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.947265121Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990839081Z" level=info msg="Start event monitor"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990896319Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990906747Z" level=info msg="Start streaming server"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990916880Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990926283Z" level=info msg="runtime interface starting up..."
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990938386Z" level=info msg="starting plugins..."
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.990951070Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:01:51 newest-cni-450938 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 16 04:01:51 newest-cni-450938 containerd[756]: time="2025-12-16T04:01:51.992912455Z" level=info msg="containerd successfully booted in 0.086898s"
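	Aside: the "failed to load cni during init" line above is containerd's CRI plugin reporting that /etc/cni/net.d is still empty at daemon start; it clears once a network config is installed there. Purely to illustrate the kind of file the plugin loads (the file name, bridge, and subnet below are placeholders, not minikube's actual CNI config):

	# Illustrative minimal CNI conflist of the shape loaded from /etc/cni/net.d.
	sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "example-bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF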
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:11:47.325438    5934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:11:47.326162    5934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:11:47.327837    5934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:11:47.328412    5934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:11:47.329406    5934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:11:47 up  9:54,  0 user,  load average: 0.35, 0.63, 1.31
	Linux newest-cni-450938 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:11:44 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:11:44 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 16 04:11:44 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:44 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:44 newest-cni-450938 kubelet[5811]: E1216 04:11:44.982283    5811 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:11:44 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:11:44 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:11:45 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 460.
	Dec 16 04:11:45 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:45 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:45 newest-cni-450938 kubelet[5817]: E1216 04:11:45.742559    5817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:11:45 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:11:45 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:11:46 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 16 04:11:46 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:46 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:46 newest-cni-450938 kubelet[5829]: E1216 04:11:46.503692    5829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:11:46 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:11:46 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:11:47 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 16 04:11:47 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:47 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:11:47 newest-cni-450938 kubelet[5914]: E1216 04:11:47.247820    5914 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:11:47 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:11:47 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938: exit status 6 (349.897122ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:11:47.872322 2087828 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-450938" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (105.10s)
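The kubelet units in the log above are caught in a restart loop because configuration validation fails on every start ("kubelet is configured to not run on a host using cgroup v1"). To confirm from the host which cgroup version is actually mounted, the standard check from the upstream Kubernetes cgroup documentation:

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup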

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[... the same connection-refused warning repeats on every poll ...]
E1216 04:10:51.131190 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[... polling and the identical warning continue for the rest of the 9m0s wait ...]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 04:13:44.847258 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: (last message repeated 5 more times)
E1216 04:13:51.134131 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning line repeated 14 more times]
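The identical WARNING lines above come from a test helper that polls the apiserver for dashboard pods by label selector and logs each failed attempt instead of aborting, which is why a refused connection produces a long run of the same message. A minimal sketch of that shape in Go, assuming client-go and an illustrative kubeconfig path (the actual helper behind helpers_test.go:338 is not reproduced here, and PodWait-style retry limits are a guess):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the profile's kubeconfig; this path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the kubernetes-dashboard namespace by label selector, logging a
	// warning on each failure and retrying -- the same shape as the repeated
	// "pod list ... returned ... connection refused" lines in the log above.
	for i := 0; i < 10; i++ {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Printf("WARNING: pod list for \"kubernetes-dashboard\" returned: %v\n", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Printf("found %d pods\n", len(pods.Items))
		return
	}
}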
E1216 04:14:44.438131 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
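The interleaved cert_rotation.go errors, like the one above, are client-go's TLS transport cache trying to reload a client certificate for a profile whose files have already been deleted by an earlier test. A minimal sketch that reproduces just the underlying failure, using the path taken verbatim from the log line (this is an illustration, not minikube code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The profile directory has been removed, so the read fails with ENOENT.
	const crt = "/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt"
	if _, err := os.ReadFile(crt); err != nil {
		// Prints: Loading client cert failed: open .../client.crt: no such file or directory
		fmt.Println("Loading client cert failed:", err)
	}
}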
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning line repeated 20 more times]
E1216 04:15:05.534373 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning line repeated 44 more times]
E1216 04:15:51.130840 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning line repeated 16 more times]
E1216 04:16:07.506579 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous warning line repeated 84 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 04:18:34.222944 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 10 times)
E1216 04:18:44.846823 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 5 times)
E1216 04:18:51.133760 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 53 times)
E1216 04:19:44.437478 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
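Note: the final attempt above fails inside client-go's request-side rate limiter rather than on the TCP dial: once the 9m0s context has expired, the limiter's Wait returns the context error before any request is sent, which client-go surfaces as "client rate limiter Wait returned an error: ...". A minimal sketch of that behavior, assuming k8s.io/client-go/util/flowcontrol; the helper's real polling code is not reproduced in this report:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token-bucket limiter comparable to client-go's default (QPS=5, burst=10).
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	// An already-expired context stands in for the exhausted 9m0s test deadline.
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond)

	// Wait checks the context before handing out a token, so this prints
	// "context deadline exceeded" even though tokens are available.
	if err := limiter.Wait(ctx); err != nil {
		fmt.Println("Wait returned:", err)
	}
}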
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 2 (313.630511ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
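For reference, warnings of the shape logged above are what a label-selector poll produces while the apiserver is down: each tick lists pods in the kubernetes-dashboard namespace, logs the transport error, and retries until a matching pod appears or the 9m0s budget runs out. A minimal client-go sketch of such a loop follows; waitForPods, the 5s interval, and the kubeconfig path are illustrative assumptions, not the actual helpers_test.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls the namespace until a pod matching the selector exists,
// warning (not failing) on transient errors such as "connection refused".
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Matches the shape of the log above: warn and keep polling.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForPods(context.Background(), cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err) // "context deadline exceeded" on timeout
}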
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-255023
helpers_test.go:244: (dbg) docker inspect no-preload-255023:

-- stdout --
	[
	    {
	        "Id": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	        "Created": "2025-12-16T03:54:15.810217174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2079014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:04:36.43296942Z",
	            "FinishedAt": "2025-12-16T04:04:35.01536344Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e-json.log",
	        "Name": "/no-preload-255023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-255023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-255023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	                "LowerDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-255023",
	                "Source": "/var/lib/docker/volumes/no-preload-255023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-255023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-255023",
	                "name.minikube.sigs.k8s.io": "no-preload-255023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8d77d7563a5b808d67c856f8fa0badaaabd481cb09d94e5909e754d7a8568f2",
	            "SandboxKey": "/var/run/docker/netns/e8d77d7563a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34664"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34665"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34668"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34666"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34667"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-255023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:af:07:e2:16:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba784dbb0bf675265a222a2ccbfc260249ee6464ab188d5ef5e9194204ab459f",
	                    "EndpointID": "d7abbd133c0576ac3aee0fa6c955e27a282475749fdbc6a2ade67d17e9ffc12d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-255023",
	                        "9e19dbb9154c"
	                    ]
	                }
	            }
	        }
	    }
]

-- /stdout --
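The dump above is the complete `docker container inspect` payload for the node container; for a quick health read, the State block alone usually suffices. A minimal extraction using the same Go-template mechanism, assuming the no-preload-255023 container still exists:

    docker container inspect no-preload-255023 \
      --format 'status={{.State.Status}} pid={{.State.Pid}} oom-killed={{.State.OOMKilled}}'
    # per the dump above this prints: status=running pid=2079014 oom-killed=false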
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 2 (332.856725ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
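Exit status 2 alongside `Running` on stdout is self-consistent: `minikube status` prints component states on stdout but also encodes them in its exit code, so a running host with a stopped kubelet or apiserver still exits non-zero. A wider probe over the same template mechanism (a sketch; Host/Kubelet/APIServer are fields of minikube's status template):

    out/minikube-linux-arm64 status -p no-preload-255023 \
      --format 'host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
    echo "exit=$?"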
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-255023 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	│ stop    │ -p no-preload-255023 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ addons  │ enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ start   │ -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	│ stop    │ -p newest-cni-450938 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-450938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │                     │
	│ image   │ newest-cni-450938 image list --format=json                                                                                                                                                                                                                 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ pause   │ -p newest-cni-450938 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ unpause │ -p newest-cni-450938 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ delete  │ -p newest-cni-450938                                                                                                                                                                                                                                       │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ delete  │ -p newest-cni-450938                                                                                                                                                                                                                                       │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ start   │ -p auto-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                              │ auto-167684                  │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:19 UTC │
	│ ssh     │ -p auto-167684 pgrep -a kubelet                                                                                                                                                                                                                            │ auto-167684                  │ jenkins │ v1.37.0 │ 16 Dec 25 04:19 UTC │ 16 Dec 25 04:19 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
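The Audit table above is rendered from minikube's persistent audit log under $MINIKUBE_HOME/logs (here /home/jenkins/minikube-integration/22158-1796512/.minikube). To query the raw entries instead of the rendered table, a sketch assuming the default audit.json layout of one JSON event per line with the table's columns inside a data payload:

    tail -n 25 "$MINIKUBE_HOME/logs/audit.json" \
      | jq -r '.data | [.command, .profile, .user, .version, .endTime] | @tsv'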
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:18:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
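Every entry below follows that klog layout, so the first byte of each line is the severity. When triaging a saved copy of this section (file name assumed), warnings and errors can be pulled out directly:

    grep -E '^[WE][0-9]{4} ' last-start.log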
	I1216 04:18:16.054172 2106560 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:18:16.054380 2106560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:18:16.054408 2106560 out.go:374] Setting ErrFile to fd 2...
	I1216 04:18:16.054458 2106560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:18:16.054760 2106560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:18:16.055279 2106560 out.go:368] Setting JSON to false
	I1216 04:18:16.056211 2106560 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":36040,"bootTime":1765822656,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:18:16.056332 2106560 start.go:143] virtualization:  
	I1216 04:18:16.060716 2106560 out.go:179] * [auto-167684] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:18:16.064925 2106560 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:18:16.065060 2106560 notify.go:221] Checking for updates...
	I1216 04:18:16.071139 2106560 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:18:16.074147 2106560 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:18:16.077162 2106560 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:18:16.080265 2106560 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:18:16.083334 2106560 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:18:16.086872 2106560 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:18:16.086989 2106560 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:18:16.112171 2106560 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:18:16.112303 2106560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:18:16.172691 2106560 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:18:16.162872127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:18:16.172800 2106560 docker.go:319] overlay module found
	I1216 04:18:16.176090 2106560 out.go:179] * Using the docker driver based on user configuration
	I1216 04:18:16.178984 2106560 start.go:309] selected driver: docker
	I1216 04:18:16.179007 2106560 start.go:927] validating driver "docker" against <nil>
	I1216 04:18:16.179078 2106560 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:18:16.179872 2106560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:18:16.271325 2106560 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:18:16.26152176 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
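The two `docker system info --format "{{json .}}"` dumps above are the driver pre-flight, run once while selecting the driver and once while validating it. When only the fields minikube gates on matter, a narrower template over the same info struct is far easier to scan:

    docker system info --format 'cpus={{.NCPU}} mem={{.MemTotal}} cgroup={{.CgroupDriver}}'
    # on this host, per the dumps above: cpus=2 mem=8214831104 cgroup=cgroupfs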
	I1216 04:18:16.271485 2106560 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:18:16.271711 2106560 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:18:16.274744 2106560 out.go:179] * Using Docker driver with root privileges
	I1216 04:18:16.277710 2106560 cni.go:84] Creating CNI manager for ""
	I1216 04:18:16.277790 2106560 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:18:16.277807 2106560 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:18:16.277901 2106560 start.go:353] cluster config:
	{Name:auto-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:18:16.283029 2106560 out.go:179] * Starting "auto-167684" primary control-plane node in "auto-167684" cluster
	I1216 04:18:16.286016 2106560 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:18:16.288977 2106560 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:18:16.291929 2106560 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 04:18:16.291981 2106560 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1216 04:18:16.291994 2106560 cache.go:65] Caching tarball of preloaded images
	I1216 04:18:16.292020 2106560 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:18:16.292090 2106560 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:18:16.292101 2106560 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1216 04:18:16.292211 2106560 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/config.json ...
	I1216 04:18:16.292229 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/config.json: {Name:mk9d1841776cf5454099c9a4580faa207acd30d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:16.311425 2106560 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:18:16.311448 2106560 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:18:16.311463 2106560 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:18:16.311493 2106560 start.go:360] acquireMachinesLock for auto-167684: {Name:mk2a1fc3f823ebef0931666f6a5e801548db6439 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:18:16.311602 2106560 start.go:364] duration metric: took 85.857µs to acquireMachinesLock for "auto-167684"
	I1216 04:18:16.311632 2106560 start.go:93] Provisioning new machine with config: &{Name:auto-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:18:16.311707 2106560 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:18:16.315212 2106560 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 04:18:16.315430 2106560 start.go:159] libmachine.API.Create for "auto-167684" (driver="docker")
	I1216 04:18:16.315468 2106560 client.go:173] LocalClient.Create starting
	I1216 04:18:16.315537 2106560 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 04:18:16.315581 2106560 main.go:143] libmachine: Decoding PEM data...
	I1216 04:18:16.315600 2106560 main.go:143] libmachine: Parsing certificate...
	I1216 04:18:16.315670 2106560 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 04:18:16.315692 2106560 main.go:143] libmachine: Decoding PEM data...
	I1216 04:18:16.315708 2106560 main.go:143] libmachine: Parsing certificate...
	I1216 04:18:16.316054 2106560 cli_runner.go:164] Run: docker network inspect auto-167684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:18:16.332547 2106560 cli_runner.go:211] docker network inspect auto-167684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:18:16.332641 2106560 network_create.go:284] running [docker network inspect auto-167684] to gather additional debugging logs...
	I1216 04:18:16.332662 2106560 cli_runner.go:164] Run: docker network inspect auto-167684
	W1216 04:18:16.348467 2106560 cli_runner.go:211] docker network inspect auto-167684 returned with exit code 1
	I1216 04:18:16.348498 2106560 network_create.go:287] error running [docker network inspect auto-167684]: docker network inspect auto-167684: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-167684 not found
	I1216 04:18:16.348523 2106560 network_create.go:289] output of [docker network inspect auto-167684]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-167684 not found
	
	** /stderr **
	I1216 04:18:16.348628 2106560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:18:16.371183 2106560 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 04:18:16.371585 2106560 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 04:18:16.371844 2106560 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 04:18:16.372262 2106560 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001993e30}
	I1216 04:18:16.372285 2106560 network_create.go:124] attempt to create docker network auto-167684 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 04:18:16.372346 2106560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-167684 auto-167684
	I1216 04:18:16.431581 2106560 network_create.go:108] docker network auto-167684 192.168.76.0/24 created
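The "skipping subnet" probes above advance the third octet in steps of 9 (49, 58, 67, 76, ...) until a /24 has no existing bridge. A rough shell approximation of that scan, assuming only Docker networks need checking (minikube also inspects host interfaces):

    used=$(docker network ls -q | xargs -r docker network inspect \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}' | tr '\n' ' ')
    for third in 49 58 67 76 85 94; do
      s="192.168.${third}.0/24"
      case " $used " in
        *" $s "*) continue ;;              # taken by an existing bridge
        *) echo "free subnet: $s"; break ;;
      esac
    done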
	I1216 04:18:16.431618 2106560 kic.go:121] calculated static IP "192.168.76.2" for the "auto-167684" container
	I1216 04:18:16.431711 2106560 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:18:16.448029 2106560 cli_runner.go:164] Run: docker volume create auto-167684 --label name.minikube.sigs.k8s.io=auto-167684 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:18:16.465710 2106560 oci.go:103] Successfully created a docker volume auto-167684
	I1216 04:18:16.465811 2106560 cli_runner.go:164] Run: docker run --rm --name auto-167684-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-167684 --entrypoint /usr/bin/test -v auto-167684:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:18:17.008544 2106560 oci.go:107] Successfully prepared a docker volume auto-167684
	I1216 04:18:17.008610 2106560 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 04:18:17.008623 2106560 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:18:17.008710 2106560 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-167684:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:18:20.984148 2106560 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-167684:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.975374138s)
	I1216 04:18:20.984181 2106560 kic.go:203] duration metric: took 3.975554335s to extract preloaded images to volume ...
	W1216 04:18:20.984311 2106560 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:18:20.984426 2106560 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:18:21.040030 2106560 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-167684 --name auto-167684 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-167684 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-167684 --network auto-167684 --ip 192.168.76.2 --volume auto-167684:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
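Each `--publish=127.0.0.1::<port>` above leaves the host side empty so Docker assigns an ephemeral port, which is why the earlier inspect output shows empty HostPort values under HostConfig.PortBindings but concrete 34664-34668 bindings under NetworkSettings.Ports. The mapped SSH port is recovered exactly as the following log lines do:

    docker container inspect auto-167684 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # -> 34674 for this run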
	I1216 04:18:21.364756 2106560 cli_runner.go:164] Run: docker container inspect auto-167684 --format={{.State.Running}}
	I1216 04:18:21.388812 2106560 cli_runner.go:164] Run: docker container inspect auto-167684 --format={{.State.Status}}
	I1216 04:18:21.412995 2106560 cli_runner.go:164] Run: docker exec auto-167684 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:18:21.467360 2106560 oci.go:144] the created container "auto-167684" has a running status.
	I1216 04:18:21.467391 2106560 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa...
	I1216 04:18:21.651264 2106560 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:18:21.674103 2106560 cli_runner.go:164] Run: docker container inspect auto-167684 --format={{.State.Status}}
	I1216 04:18:21.697068 2106560 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:18:21.697089 2106560 kic_runner.go:114] Args: [docker exec --privileged auto-167684 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:18:21.765304 2106560 cli_runner.go:164] Run: docker container inspect auto-167684 --format={{.State.Status}}
	I1216 04:18:21.785446 2106560 machine.go:94] provisionDockerMachine start ...
	I1216 04:18:21.785551 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:21.808847 2106560 main.go:143] libmachine: Using SSH client type: native
	I1216 04:18:21.809180 2106560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34674 <nil> <nil>}
	I1216 04:18:21.809189 2106560 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:18:21.809905 2106560 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51948->127.0.0.1:34674: read: connection reset by peer
	I1216 04:18:24.946567 2106560 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-167684
	
	I1216 04:18:24.946595 2106560 ubuntu.go:182] provisioning hostname "auto-167684"
	I1216 04:18:24.946658 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:24.964086 2106560 main.go:143] libmachine: Using SSH client type: native
	I1216 04:18:24.964403 2106560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34674 <nil> <nil>}
	I1216 04:18:24.964424 2106560 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-167684 && echo "auto-167684" | sudo tee /etc/hostname
	I1216 04:18:25.109254 2106560 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-167684
	
	I1216 04:18:25.109359 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:25.127373 2106560 main.go:143] libmachine: Using SSH client type: native
	I1216 04:18:25.127688 2106560 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34674 <nil> <nil>}
	I1216 04:18:25.127719 2106560 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-167684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-167684/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-167684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:18:25.271591 2106560 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:18:25.271621 2106560 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:18:25.271652 2106560 ubuntu.go:190] setting up certificates
	I1216 04:18:25.271669 2106560 provision.go:84] configureAuth start
	I1216 04:18:25.271734 2106560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-167684
	I1216 04:18:25.291704 2106560 provision.go:143] copyHostCerts
	I1216 04:18:25.291771 2106560 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:18:25.291784 2106560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:18:25.291863 2106560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:18:25.291971 2106560 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:18:25.291982 2106560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:18:25.292011 2106560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:18:25.292078 2106560 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:18:25.292087 2106560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:18:25.292113 2106560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:18:25.292172 2106560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.auto-167684 san=[127.0.0.1 192.168.76.2 auto-167684 localhost minikube]
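The SAN list above (loopback, the static container IP 192.168.76.2, and the hostname aliases) is what lets one server.pem serve both the tunneled 127.0.0.1 ports and in-network clients. A standard openssl check to confirm what landed in the generated cert (path shortened from the log):

    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'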
	I1216 04:18:25.395981 2106560 provision.go:177] copyRemoteCerts
	I1216 04:18:25.396063 2106560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:18:25.396123 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:25.413888 2106560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa Username:docker}
	I1216 04:18:25.510914 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:18:25.529584 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 04:18:25.547809 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:18:25.566257 2106560 provision.go:87] duration metric: took 294.564316ms to configureAuth
	I1216 04:18:25.566284 2106560 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:18:25.566483 2106560 config.go:182] Loaded profile config "auto-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 04:18:25.566499 2106560 machine.go:97] duration metric: took 3.781033583s to provisionDockerMachine
	I1216 04:18:25.566507 2106560 client.go:176] duration metric: took 9.251028488s to LocalClient.Create
	I1216 04:18:25.566522 2106560 start.go:167] duration metric: took 9.251093734s to libmachine.API.Create "auto-167684"
	I1216 04:18:25.566529 2106560 start.go:293] postStartSetup for "auto-167684" (driver="docker")
	I1216 04:18:25.566538 2106560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:18:25.566601 2106560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:18:25.566652 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:25.584651 2106560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa Username:docker}
	I1216 04:18:25.683282 2106560 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:18:25.687109 2106560 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:18:25.687143 2106560 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:18:25.687155 2106560 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:18:25.687222 2106560 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:18:25.687339 2106560 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:18:25.687477 2106560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:18:25.695176 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:18:25.713178 2106560 start.go:296] duration metric: took 146.633299ms for postStartSetup
	I1216 04:18:25.713551 2106560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-167684
	I1216 04:18:25.730859 2106560 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/config.json ...
	I1216 04:18:25.731174 2106560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:18:25.731225 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:25.748736 2106560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa Username:docker}
	I1216 04:18:25.843983 2106560 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:18:25.848491 2106560 start.go:128] duration metric: took 9.536767367s to createHost
	I1216 04:18:25.848526 2106560 start.go:83] releasing machines lock for "auto-167684", held for 9.536909878s
	I1216 04:18:25.848595 2106560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-167684
	I1216 04:18:25.865153 2106560 ssh_runner.go:195] Run: cat /version.json
	I1216 04:18:25.865175 2106560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:18:25.865219 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:25.865242 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:25.891713 2106560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa Username:docker}
	I1216 04:18:25.895290 2106560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa Username:docker}
	I1216 04:18:26.081686 2106560 ssh_runner.go:195] Run: systemctl --version
	I1216 04:18:26.088381 2106560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:18:26.092778 2106560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:18:26.092860 2106560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:18:26.120213 2106560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:18:26.120241 2106560 start.go:496] detecting cgroup driver to use...
	I1216 04:18:26.120274 2106560 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:18:26.120340 2106560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:18:26.135552 2106560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:18:26.148127 2106560 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:18:26.148245 2106560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:18:26.165869 2106560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:18:26.184523 2106560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:18:26.302028 2106560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:18:26.424481 2106560 docker.go:234] disabling docker service ...
	I1216 04:18:26.424575 2106560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:18:26.450739 2106560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:18:26.463539 2106560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:18:26.586496 2106560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:18:26.743388 2106560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:18:26.760667 2106560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:18:26.776686 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:18:26.786687 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:18:26.798204 2106560 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:18:26.798277 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:18:26.807436 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:18:26.816366 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:18:26.825545 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:18:26.834436 2106560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:18:26.842623 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:18:26.852129 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:18:26.861031 2106560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:18:26.869888 2106560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:18:26.877444 2106560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:18:26.884799 2106560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:18:27.004678 2106560 ssh_runner.go:195] Run: sudo systemctl restart containerd
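Note: the series of sed invocations above rewrites /etc/containerd/config.toml in place (sandbox image, SystemdCgroup, runtime type, conf_dir, unprivileged ports) before the daemon-reload and restart. A sketch of just the SystemdCgroup edit expressed as a Go multiline regexp, equivalent in effect to the sed line at 04:18:26.798277:

// Hedged sketch: Go equivalent of
// sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^ and $ match per line, like sed's line-by-line addressing.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}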
	I1216 04:18:27.137704 2106560 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:18:27.137777 2106560 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:18:27.141605 2106560 start.go:564] Will wait 60s for crictl version
	I1216 04:18:27.141671 2106560 ssh_runner.go:195] Run: which crictl
	I1216 04:18:27.145417 2106560 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:18:27.169362 2106560 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:18:27.169433 2106560 ssh_runner.go:195] Run: containerd --version
	I1216 04:18:27.189251 2106560 ssh_runner.go:195] Run: containerd --version
	I1216 04:18:27.215789 2106560 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1216 04:18:27.218948 2106560 cli_runner.go:164] Run: docker network inspect auto-167684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:18:27.235927 2106560 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:18:27.239555 2106560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
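Note: the bash one-liner above pins host.minikube.internal to the network gateway (192.168.76.1) by filtering any stale entry out of /etc/hosts and appending a fresh one. A simplified, non-atomic Go sketch of the same rewrite (the real command stages through /tmp and sudo cp):

// Hedged sketch: refresh the host.minikube.internal entry in /etc/hosts.
package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry, exactly like the grep -v in the log.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.76.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}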
	I1216 04:18:27.249771 2106560 kubeadm.go:884] updating cluster {Name:auto-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:18:27.249892 2106560 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 04:18:27.249969 2106560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:18:27.279709 2106560 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:18:27.279733 2106560 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:18:27.279794 2106560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:18:27.303674 2106560 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:18:27.303698 2106560 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:18:27.303706 2106560 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1216 04:18:27.303821 2106560 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-167684 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:18:27.303893 2106560 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:18:27.333569 2106560 cni.go:84] Creating CNI manager for ""
	I1216 04:18:27.333602 2106560 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:18:27.333618 2106560 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:18:27.333642 2106560 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-167684 NodeName:auto-167684 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:18:27.333768 2106560 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-167684"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:18:27.333845 2106560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 04:18:27.341806 2106560 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:18:27.341880 2106560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:18:27.349403 2106560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1216 04:18:27.362271 2106560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 04:18:27.375150 2106560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
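Note: the kubeadm.yaml.new copied above is the rendered form of the kubeadm config printed at kubeadm.go:196. A toy text/template sketch of that rendering step; the field names here are hypothetical and cover only a fragment of the real template:

// Hedged sketch: render a fragment of a kubeadm config from a Go template.
package main

import (
	"os"
	"text/template"
)

// Illustrative template only; minikube's real one carries many more options.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, struct {
		NodeIP, CRISocket, NodeName string
		Port                        int
	}{"192.168.76.2", "/run/containerd/containerd.sock", "auto-167684", 8443})
	if err != nil {
		panic(err)
	}
}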
	I1216 04:18:27.388240 2106560 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:18:27.391648 2106560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:18:27.401060 2106560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:18:27.550885 2106560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:18:27.571570 2106560 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684 for IP: 192.168.76.2
	I1216 04:18:27.571634 2106560 certs.go:195] generating shared ca certs ...
	I1216 04:18:27.571667 2106560 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:27.571837 2106560 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:18:27.571909 2106560 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:18:27.571944 2106560 certs.go:257] generating profile certs ...
	I1216 04:18:27.572018 2106560 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.key
	I1216 04:18:27.572053 2106560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt with IP's: []
	I1216 04:18:27.801074 2106560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt ...
	I1216 04:18:27.801108 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt: {Name:mk5aa481a5283393e494089ee7a66ac68f1acecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:27.801362 2106560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.key ...
	I1216 04:18:27.801378 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.key: {Name:mkfce5b56d88fcd233dfd56b30ec4b159375b5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:27.801502 2106560 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.key.a3994056
	I1216 04:18:27.801522 2106560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.crt.a3994056 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 04:18:27.997169 2106560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.crt.a3994056 ...
	I1216 04:18:27.997199 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.crt.a3994056: {Name:mk2a3ce9f4e9e706568ce76ffd67ac57f64a545f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:27.997376 2106560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.key.a3994056 ...
	I1216 04:18:27.997390 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.key.a3994056: {Name:mkf80f3dd21ba01f9c84639ae10ea1ab6e066f71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:27.997481 2106560 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.crt.a3994056 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.crt
	I1216 04:18:27.997569 2106560 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.key.a3994056 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.key
	I1216 04:18:27.997624 2106560 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.key
	I1216 04:18:27.997642 2106560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.crt with IP's: []
	I1216 04:18:28.539882 2106560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.crt ...
	I1216 04:18:28.539917 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.crt: {Name:mkf523f406a5f1f824b54aba4988be2269117be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:28.540149 2106560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.key ...
	I1216 04:18:28.540164 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.key: {Name:mk3b470c01b0d61a7e17e926058e4888a4b97a88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:28.540363 2106560 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:18:28.540413 2106560 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:18:28.540421 2106560 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:18:28.540454 2106560 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:18:28.540486 2106560 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:18:28.540526 2106560 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:18:28.540580 2106560 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:18:28.541142 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:18:28.559820 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:18:28.580344 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:18:28.599580 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:18:28.617418 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1216 04:18:28.635164 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:18:28.653498 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:18:28.670839 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:18:28.688531 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:18:28.706841 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:18:28.724744 2106560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:18:28.742614 2106560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:18:28.755537 2106560 ssh_runner.go:195] Run: openssl version
	I1216 04:18:28.761941 2106560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:18:28.769181 2106560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:18:28.776606 2106560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:18:28.780466 2106560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:18:28.780545 2106560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:18:28.821261 2106560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:18:28.828751 2106560 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:18:28.836035 2106560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:18:28.843451 2106560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:18:28.851147 2106560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:18:28.855031 2106560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:18:28.855213 2106560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:18:28.896204 2106560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:18:28.903591 2106560 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 04:18:28.911026 2106560 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:18:28.918370 2106560 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:18:28.925522 2106560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:18:28.929261 2106560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:18:28.929350 2106560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:18:28.971441 2106560 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:18:28.979064 2106560 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
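Note: the <hash>.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's c_rehash convention, where the link name is the certificate's subject hash. A sketch that derives the link name by shelling out to the same openssl invocation the log runs:

// Hedged sketch: compute a cert's /etc/ssl/certs/<hash>.0 link name.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func subjectHash(certPath string) (string, error) {
	// Same command as in the log: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Printf("symlink target: /etc/ssl/certs/%s.0\n", h)
}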
	I1216 04:18:28.986564 2106560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:18:28.990939 2106560 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:18:28.991121 2106560 kubeadm.go:401] StartCluster: {Name:auto-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:18:28.991222 2106560 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:18:28.991300 2106560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:18:29.021113 2106560 cri.go:89] found id: ""
	I1216 04:18:29.021190 2106560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:18:29.029533 2106560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:18:29.038024 2106560 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:18:29.038123 2106560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:18:29.046999 2106560 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:18:29.047022 2106560 kubeadm.go:158] found existing configuration files:
	
	I1216 04:18:29.047098 2106560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:18:29.055230 2106560 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:18:29.055333 2106560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:18:29.062961 2106560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:18:29.071127 2106560 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:18:29.071248 2106560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:18:29.078796 2106560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:18:29.086416 2106560 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:18:29.086522 2106560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:18:29.093993 2106560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:18:29.102065 2106560 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:18:29.102225 2106560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:18:29.109561 2106560 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:18:29.186702 2106560 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 04:18:29.187031 2106560 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:18:29.273880 2106560 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:18:45.998472 2106560 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 04:18:45.998530 2106560 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:18:45.998651 2106560 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:18:45.998727 2106560 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:18:45.998770 2106560 kubeadm.go:319] OS: Linux
	I1216 04:18:45.998831 2106560 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:18:45.998896 2106560 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:18:45.998961 2106560 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:18:45.999032 2106560 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:18:45.999131 2106560 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:18:45.999222 2106560 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:18:45.999273 2106560 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:18:45.999336 2106560 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:18:45.999408 2106560 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:18:45.999489 2106560 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:18:45.999599 2106560 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:18:45.999708 2106560 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:18:45.999775 2106560 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:18:46.003735 2106560 out.go:252]   - Generating certificates and keys ...
	I1216 04:18:46.003853 2106560 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:18:46.003923 2106560 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:18:46.003997 2106560 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:18:46.004058 2106560 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:18:46.004123 2106560 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:18:46.004176 2106560 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:18:46.004234 2106560 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:18:46.004357 2106560 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-167684 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:18:46.004409 2106560 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:18:46.004538 2106560 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-167684 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:18:46.004606 2106560 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:18:46.004669 2106560 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:18:46.004713 2106560 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:18:46.004769 2106560 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:18:46.004820 2106560 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:18:46.004876 2106560 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:18:46.004929 2106560 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:18:46.004993 2106560 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:18:46.005047 2106560 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:18:46.005136 2106560 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:18:46.005202 2106560 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:18:46.008351 2106560 out.go:252]   - Booting up control plane ...
	I1216 04:18:46.008540 2106560 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:18:46.008642 2106560 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:18:46.008743 2106560 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:18:46.008877 2106560 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:18:46.008977 2106560 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:18:46.009105 2106560 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:18:46.009233 2106560 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:18:46.009284 2106560 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:18:46.009435 2106560 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:18:46.009564 2106560 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:18:46.009655 2106560 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000694301s
	I1216 04:18:46.009812 2106560 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 04:18:46.009910 2106560 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 04:18:46.010008 2106560 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 04:18:46.010093 2106560 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 04:18:46.010176 2106560 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.333564249s
	I1216 04:18:46.010250 2106560 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.559283812s
	I1216 04:18:46.010322 2106560 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501322024s
	I1216 04:18:46.010452 2106560 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 04:18:46.010597 2106560 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 04:18:46.010669 2106560 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 04:18:46.010947 2106560 kubeadm.go:319] [mark-control-plane] Marking the node auto-167684 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 04:18:46.011016 2106560 kubeadm.go:319] [bootstrap-token] Using token: lv8n1y.m9myccf5lcw4p1e0
	I1216 04:18:46.014037 2106560 out.go:252]   - Configuring RBAC rules ...
	I1216 04:18:46.014173 2106560 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 04:18:46.014264 2106560 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 04:18:46.014450 2106560 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 04:18:46.014588 2106560 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 04:18:46.014716 2106560 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 04:18:46.014814 2106560 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 04:18:46.014950 2106560 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 04:18:46.014997 2106560 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 04:18:46.015227 2106560 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 04:18:46.015243 2106560 kubeadm.go:319] 
	I1216 04:18:46.015363 2106560 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 04:18:46.015377 2106560 kubeadm.go:319] 
	I1216 04:18:46.015468 2106560 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 04:18:46.015477 2106560 kubeadm.go:319] 
	I1216 04:18:46.015503 2106560 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 04:18:46.015565 2106560 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 04:18:46.015620 2106560 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 04:18:46.015628 2106560 kubeadm.go:319] 
	I1216 04:18:46.015682 2106560 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 04:18:46.015690 2106560 kubeadm.go:319] 
	I1216 04:18:46.015742 2106560 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 04:18:46.015749 2106560 kubeadm.go:319] 
	I1216 04:18:46.015802 2106560 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 04:18:46.015879 2106560 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 04:18:46.015951 2106560 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 04:18:46.015958 2106560 kubeadm.go:319] 
	I1216 04:18:46.016043 2106560 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 04:18:46.016123 2106560 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 04:18:46.016131 2106560 kubeadm.go:319] 
	I1216 04:18:46.016215 2106560 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lv8n1y.m9myccf5lcw4p1e0 \
	I1216 04:18:46.016323 2106560 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5ff7898403cb7d1c6cd652105c589f920cbc34cc5b43666798ad823c7f84bffc \
	I1216 04:18:46.016356 2106560 kubeadm.go:319] 	--control-plane 
	I1216 04:18:46.016363 2106560 kubeadm.go:319] 
	I1216 04:18:46.016449 2106560 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 04:18:46.016456 2106560 kubeadm.go:319] 
	I1216 04:18:46.016558 2106560 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lv8n1y.m9myccf5lcw4p1e0 \
	I1216 04:18:46.016681 2106560 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5ff7898403cb7d1c6cd652105c589f920cbc34cc5b43666798ad823c7f84bffc 
	I1216 04:18:46.016695 2106560 cni.go:84] Creating CNI manager for ""
	I1216 04:18:46.016702 2106560 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:18:46.019929 2106560 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 04:18:46.022844 2106560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 04:18:46.026934 2106560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 04:18:46.026955 2106560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 04:18:46.040328 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 04:18:46.380749 2106560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 04:18:46.380889 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:46.380971 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-167684 minikube.k8s.io/updated_at=2025_12_16T04_18_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=auto-167684 minikube.k8s.io/primary=true
	I1216 04:18:46.516360 2106560 ops.go:34] apiserver oom_adj: -16
	I1216 04:18:46.516378 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:47.017098 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:47.516476 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:48.016514 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:48.517090 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:49.016751 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:49.516895 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:50.016704 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:50.517234 2106560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:18:50.655271 2106560 kubeadm.go:1114] duration metric: took 4.274429933s to wait for elevateKubeSystemPrivileges
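Note: elevateKubeSystemPrivileges above is a poll loop, one `kubectl get sa default` roughly every 500ms until the default service account exists. A sketch of the same wait expressed with client-go and apimachinery's wait helpers instead of shelling out; clientset construction is omitted:

// Hedged sketch: poll until the "default" ServiceAccount appears.
package cluster

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForDefaultSA(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			if _, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
				return false, nil // not created yet; keep polling
			}
			return true, nil
		})
}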
	I1216 04:18:50.655299 2106560 kubeadm.go:403] duration metric: took 21.664182436s to StartCluster
	I1216 04:18:50.655316 2106560 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:50.655384 2106560 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:18:50.656372 2106560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:18:50.656600 2106560 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:18:50.656717 2106560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 04:18:50.656958 2106560 config.go:182] Loaded profile config "auto-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 04:18:50.656987 2106560 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:18:50.657115 2106560 addons.go:70] Setting storage-provisioner=true in profile "auto-167684"
	I1216 04:18:50.657130 2106560 addons.go:239] Setting addon storage-provisioner=true in "auto-167684"
	I1216 04:18:50.657133 2106560 addons.go:70] Setting default-storageclass=true in profile "auto-167684"
	I1216 04:18:50.657148 2106560 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-167684"
	I1216 04:18:50.657154 2106560 host.go:66] Checking if "auto-167684" exists ...
	I1216 04:18:50.657494 2106560 cli_runner.go:164] Run: docker container inspect auto-167684 --format={{.State.Status}}
	I1216 04:18:50.657653 2106560 cli_runner.go:164] Run: docker container inspect auto-167684 --format={{.State.Status}}
	I1216 04:18:50.660563 2106560 out.go:179] * Verifying Kubernetes components...
	I1216 04:18:50.664543 2106560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:18:50.697557 2106560 addons.go:239] Setting addon default-storageclass=true in "auto-167684"
	I1216 04:18:50.697599 2106560 host.go:66] Checking if "auto-167684" exists ...
	I1216 04:18:50.698023 2106560 cli_runner.go:164] Run: docker container inspect auto-167684 --format={{.State.Status}}
	I1216 04:18:50.717273 2106560 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:18:50.722548 2106560 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:18:50.722577 2106560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:18:50.722646 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:50.749585 2106560 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:18:50.749607 2106560 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:18:50.749667 2106560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-167684
	I1216 04:18:50.775787 2106560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa Username:docker}
	I1216 04:18:50.785561 2106560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/auto-167684/id_rsa Username:docker}
	I1216 04:18:51.037041 2106560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:18:51.147844 2106560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:18:51.325649 2106560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 04:18:51.325763 2106560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:18:51.966312 2106560 node_ready.go:35] waiting up to 15m0s for node "auto-167684" to be "Ready" ...
	I1216 04:18:51.966547 2106560 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1216 04:18:52.036837 2106560 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 04:18:52.039684 2106560 addons.go:530] duration metric: took 1.382700689s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 04:18:52.472090 2106560 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-167684" context rescaled to 1 replicas
	W1216 04:18:53.970661 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:18:56.469565 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:18:58.470505 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:00.970710 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:03.474663 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:05.969908 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:07.970773 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:10.470162 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:12.470324 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:14.969951 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:17.470177 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:19.971193 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:22.470465 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:24.970687 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:27.469944 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	W1216 04:19:29.969886 2106560 node_ready.go:57] node "auto-167684" has "Ready":"False" status (will retry)
	I1216 04:19:31.969777 2106560 node_ready.go:49] node "auto-167684" is "Ready"
	I1216 04:19:31.969860 2106560 node_ready.go:38] duration metric: took 40.002899995s for node "auto-167684" to be "Ready" ...
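Note: the 40s of "will retry" lines above come from re-fetching the Node object until its Ready condition turns True. A client-go sketch of that single readiness check; the caller supplies the retry cadence:

// Hedged sketch: is the node's Ready condition True?
package cluster

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}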
	I1216 04:19:31.969888 2106560 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:19:31.969966 2106560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:19:31.985740 2106560 api_server.go:72] duration metric: took 41.329104566s to wait for apiserver process to appear ...
	I1216 04:19:31.985764 2106560 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:19:31.985784 2106560 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 04:19:31.994044 2106560 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 04:19:31.995553 2106560 api_server.go:141] control plane version: v1.34.2
	I1216 04:19:31.995586 2106560 api_server.go:131] duration metric: took 9.815167ms to wait for apiserver health ...
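Note: the healthz probe above is a plain HTTPS GET against the apiserver, trusting the cluster CA; a 200 with body "ok" counts as healthy. A sketch under that assumption, with the CA path taken from the scp step earlier in the log:

// Hedged sketch: probe the apiserver's /healthz endpoint.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	ca, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // CA location on the node, per the log
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}}}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}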
	I1216 04:19:31.995596 2106560 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 04:19:32.011553 2106560 system_pods.go:59] 8 kube-system pods found
	I1216 04:19:32.011600 2106560 system_pods.go:61] "coredns-66bc5c9577-vp9xx" [e0b6f061-85c9-4f9a-aaf1-45abc31c4535] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:19:32.011608 2106560 system_pods.go:61] "etcd-auto-167684" [af5f3afc-63da-4e66-9f41-eab46c1761f9] Running
	I1216 04:19:32.011614 2106560 system_pods.go:61] "kindnet-w4mmf" [f102cde5-3299-43d2-beb3-26f04c9b6f9a] Running
	I1216 04:19:32.011618 2106560 system_pods.go:61] "kube-apiserver-auto-167684" [5657e3b3-b93d-4e76-8106-e4da97c2f3e5] Running
	I1216 04:19:32.011621 2106560 system_pods.go:61] "kube-controller-manager-auto-167684" [b700710a-6d18-4922-b0ba-b00888375707] Running
	I1216 04:19:32.011625 2106560 system_pods.go:61] "kube-proxy-d7lbv" [1158f9a3-c98e-457a-8163-98bbeec2d4ef] Running
	I1216 04:19:32.011631 2106560 system_pods.go:61] "kube-scheduler-auto-167684" [58ed8333-a587-423a-91d4-836aa0a7c3f0] Running
	I1216 04:19:32.011636 2106560 system_pods.go:61] "storage-provisioner" [99f7c8d7-b1db-4fd1-ac8c-4e3a6a7820f9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:19:32.011652 2106560 system_pods.go:74] duration metric: took 16.04027ms to wait for pod list to return data ...
	I1216 04:19:32.011669 2106560 default_sa.go:34] waiting for default service account to be created ...
	I1216 04:19:32.014934 2106560 default_sa.go:45] found service account: "default"
	I1216 04:19:32.015006 2106560 default_sa.go:55] duration metric: took 3.329621ms for default service account to be created ...
	I1216 04:19:32.015022 2106560 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 04:19:32.018210 2106560 system_pods.go:86] 8 kube-system pods found
	I1216 04:19:32.018247 2106560 system_pods.go:89] "coredns-66bc5c9577-vp9xx" [e0b6f061-85c9-4f9a-aaf1-45abc31c4535] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:19:32.018255 2106560 system_pods.go:89] "etcd-auto-167684" [af5f3afc-63da-4e66-9f41-eab46c1761f9] Running
	I1216 04:19:32.018262 2106560 system_pods.go:89] "kindnet-w4mmf" [f102cde5-3299-43d2-beb3-26f04c9b6f9a] Running
	I1216 04:19:32.018266 2106560 system_pods.go:89] "kube-apiserver-auto-167684" [5657e3b3-b93d-4e76-8106-e4da97c2f3e5] Running
	I1216 04:19:32.018271 2106560 system_pods.go:89] "kube-controller-manager-auto-167684" [b700710a-6d18-4922-b0ba-b00888375707] Running
	I1216 04:19:32.018276 2106560 system_pods.go:89] "kube-proxy-d7lbv" [1158f9a3-c98e-457a-8163-98bbeec2d4ef] Running
	I1216 04:19:32.018281 2106560 system_pods.go:89] "kube-scheduler-auto-167684" [58ed8333-a587-423a-91d4-836aa0a7c3f0] Running
	I1216 04:19:32.018287 2106560 system_pods.go:89] "storage-provisioner" [99f7c8d7-b1db-4fd1-ac8c-4e3a6a7820f9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:19:32.018307 2106560 retry.go:31] will retry after 192.35627ms: missing components: kube-dns
	I1216 04:19:32.216060 2106560 system_pods.go:86] 8 kube-system pods found
	I1216 04:19:32.216100 2106560 system_pods.go:89] "coredns-66bc5c9577-vp9xx" [e0b6f061-85c9-4f9a-aaf1-45abc31c4535] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:19:32.216116 2106560 system_pods.go:89] "etcd-auto-167684" [af5f3afc-63da-4e66-9f41-eab46c1761f9] Running
	I1216 04:19:32.216122 2106560 system_pods.go:89] "kindnet-w4mmf" [f102cde5-3299-43d2-beb3-26f04c9b6f9a] Running
	I1216 04:19:32.216127 2106560 system_pods.go:89] "kube-apiserver-auto-167684" [5657e3b3-b93d-4e76-8106-e4da97c2f3e5] Running
	I1216 04:19:32.216131 2106560 system_pods.go:89] "kube-controller-manager-auto-167684" [b700710a-6d18-4922-b0ba-b00888375707] Running
	I1216 04:19:32.216137 2106560 system_pods.go:89] "kube-proxy-d7lbv" [1158f9a3-c98e-457a-8163-98bbeec2d4ef] Running
	I1216 04:19:32.216141 2106560 system_pods.go:89] "kube-scheduler-auto-167684" [58ed8333-a587-423a-91d4-836aa0a7c3f0] Running
	I1216 04:19:32.216151 2106560 system_pods.go:89] "storage-provisioner" [99f7c8d7-b1db-4fd1-ac8c-4e3a6a7820f9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:19:32.216167 2106560 retry.go:31] will retry after 258.594532ms: missing components: kube-dns
	I1216 04:19:32.479411 2106560 system_pods.go:86] 8 kube-system pods found
	I1216 04:19:32.479456 2106560 system_pods.go:89] "coredns-66bc5c9577-vp9xx" [e0b6f061-85c9-4f9a-aaf1-45abc31c4535] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:19:32.479463 2106560 system_pods.go:89] "etcd-auto-167684" [af5f3afc-63da-4e66-9f41-eab46c1761f9] Running
	I1216 04:19:32.479470 2106560 system_pods.go:89] "kindnet-w4mmf" [f102cde5-3299-43d2-beb3-26f04c9b6f9a] Running
	I1216 04:19:32.479474 2106560 system_pods.go:89] "kube-apiserver-auto-167684" [5657e3b3-b93d-4e76-8106-e4da97c2f3e5] Running
	I1216 04:19:32.479479 2106560 system_pods.go:89] "kube-controller-manager-auto-167684" [b700710a-6d18-4922-b0ba-b00888375707] Running
	I1216 04:19:32.479485 2106560 system_pods.go:89] "kube-proxy-d7lbv" [1158f9a3-c98e-457a-8163-98bbeec2d4ef] Running
	I1216 04:19:32.479489 2106560 system_pods.go:89] "kube-scheduler-auto-167684" [58ed8333-a587-423a-91d4-836aa0a7c3f0] Running
	I1216 04:19:32.479506 2106560 system_pods.go:89] "storage-provisioner" [99f7c8d7-b1db-4fd1-ac8c-4e3a6a7820f9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:19:32.479537 2106560 retry.go:31] will retry after 443.698797ms: missing components: kube-dns
	I1216 04:19:32.927556 2106560 system_pods.go:86] 8 kube-system pods found
	I1216 04:19:32.927633 2106560 system_pods.go:89] "coredns-66bc5c9577-vp9xx" [e0b6f061-85c9-4f9a-aaf1-45abc31c4535] Running
	I1216 04:19:32.927647 2106560 system_pods.go:89] "etcd-auto-167684" [af5f3afc-63da-4e66-9f41-eab46c1761f9] Running
	I1216 04:19:32.927652 2106560 system_pods.go:89] "kindnet-w4mmf" [f102cde5-3299-43d2-beb3-26f04c9b6f9a] Running
	I1216 04:19:32.927657 2106560 system_pods.go:89] "kube-apiserver-auto-167684" [5657e3b3-b93d-4e76-8106-e4da97c2f3e5] Running
	I1216 04:19:32.927661 2106560 system_pods.go:89] "kube-controller-manager-auto-167684" [b700710a-6d18-4922-b0ba-b00888375707] Running
	I1216 04:19:32.927667 2106560 system_pods.go:89] "kube-proxy-d7lbv" [1158f9a3-c98e-457a-8163-98bbeec2d4ef] Running
	I1216 04:19:32.927670 2106560 system_pods.go:89] "kube-scheduler-auto-167684" [58ed8333-a587-423a-91d4-836aa0a7c3f0] Running
	I1216 04:19:32.927674 2106560 system_pods.go:89] "storage-provisioner" [99f7c8d7-b1db-4fd1-ac8c-4e3a6a7820f9] Running
	I1216 04:19:32.927681 2106560 system_pods.go:126] duration metric: took 912.652367ms to wait for k8s-apps to be running ...
	I1216 04:19:32.927693 2106560 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 04:19:32.927753 2106560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:19:32.941265 2106560 system_svc.go:56] duration metric: took 13.562444ms WaitForService to wait for kubelet
	I1216 04:19:32.941297 2106560 kubeadm.go:587] duration metric: took 42.284664792s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:19:32.941318 2106560 node_conditions.go:102] verifying NodePressure condition ...
	I1216 04:19:32.944774 2106560 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 04:19:32.944820 2106560 node_conditions.go:123] node cpu capacity is 2
	I1216 04:19:32.944840 2106560 node_conditions.go:105] duration metric: took 3.517392ms to run NodePressure ...
	I1216 04:19:32.944854 2106560 start.go:242] waiting for startup goroutines ...
	I1216 04:19:32.944862 2106560 start.go:247] waiting for cluster config update ...
	I1216 04:19:32.944892 2106560 start.go:256] writing updated cluster config ...
	I1216 04:19:32.945193 2106560 ssh_runner.go:195] Run: rm -f paused
	I1216 04:19:32.948754 2106560 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:19:32.952377 2106560 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vp9xx" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:32.957465 2106560 pod_ready.go:94] pod "coredns-66bc5c9577-vp9xx" is "Ready"
	I1216 04:19:32.957495 2106560 pod_ready.go:86] duration metric: took 5.090771ms for pod "coredns-66bc5c9577-vp9xx" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:32.960116 2106560 pod_ready.go:83] waiting for pod "etcd-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:32.965072 2106560 pod_ready.go:94] pod "etcd-auto-167684" is "Ready"
	I1216 04:19:32.965100 2106560 pod_ready.go:86] duration metric: took 4.955939ms for pod "etcd-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:32.967733 2106560 pod_ready.go:83] waiting for pod "kube-apiserver-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:32.972577 2106560 pod_ready.go:94] pod "kube-apiserver-auto-167684" is "Ready"
	I1216 04:19:32.972604 2106560 pod_ready.go:86] duration metric: took 4.846132ms for pod "kube-apiserver-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:32.975153 2106560 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:33.352714 2106560 pod_ready.go:94] pod "kube-controller-manager-auto-167684" is "Ready"
	I1216 04:19:33.352744 2106560 pod_ready.go:86] duration metric: took 377.567647ms for pod "kube-controller-manager-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:33.553169 2106560 pod_ready.go:83] waiting for pod "kube-proxy-d7lbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:33.952809 2106560 pod_ready.go:94] pod "kube-proxy-d7lbv" is "Ready"
	I1216 04:19:33.952839 2106560 pod_ready.go:86] duration metric: took 399.642534ms for pod "kube-proxy-d7lbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:34.153861 2106560 pod_ready.go:83] waiting for pod "kube-scheduler-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:34.553317 2106560 pod_ready.go:94] pod "kube-scheduler-auto-167684" is "Ready"
	I1216 04:19:34.553347 2106560 pod_ready.go:86] duration metric: took 399.458226ms for pod "kube-scheduler-auto-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:19:34.553361 2106560 pod_ready.go:40] duration metric: took 1.604572928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:19:34.604000 2106560 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 04:19:34.607145 2106560 out.go:179] * Done! kubectl is now configured to use "auto-167684" cluster and "default" namespace by default
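	
	The wait loop logged above reduces to four checks. As a rough replay sketch (the profile name, pgrep pattern, and healthz URL are taken from the log itself; `minikube -p auto-167684 ssh --` is assumed as the way onto the node):
	
	  kubectl --context auto-167684 get nodes                                        # node must report Ready
	  minikube -p auto-167684 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # apiserver process exists
	  curl -sk https://192.168.76.2:8443/healthz                                     # expect: ok
	  kubectl --context auto-167684 -n kube-system get pods                          # system pods Running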
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207708993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207730006Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207767502Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207785201Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207802406Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207818545Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207829860Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207852456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207871098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207912377Z" level=info msg="Connect containerd service"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.208248542Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.209178851Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.222779090Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223109158Z" level=info msg="Start recovering state"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223239174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223305749Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252281513Z" level=info msg="Start event monitor"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252522812Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252604139Z" level=info msg="Start streaming server"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252676581Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252754257Z" level=info msg="runtime interface starting up..."
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252816426Z" level=info msg="starting plugins..."
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.253604035Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.254042786Z" level=info msg="containerd successfully booted in 0.075355s"
	Dec 16 04:04:42 no-preload-255023 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:19:47.755434    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:19:47.756381    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:19:47.758194    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:19:47.758902    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:19:47.760885    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
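	
	This refusal matches the empty "container status" table above: with the kubelet crash-looping (see the kubelet section below), no static pods run, so nothing listens on 8443. A rough on-node check (assuming `minikube -p no-preload-255023 ssh` as the entrypoint):
	
	  sudo ss -ltnp | grep 8443                  # expect no listener while the apiserver is down
	  sudo crictl ps -a | grep kube-apiserver    # expect no container, matching the table above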
	
	
	==> dmesg <==
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:19:47 up 10:02,  0 user,  load average: 0.74, 0.69, 1.05
	Linux no-preload-255023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:19:44 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:19:45 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 16 04:19:45 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:45 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:45 no-preload-255023 kubelet[7981]: E1216 04:19:45.495109    7981 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:19:45 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:19:45 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:19:46 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 16 04:19:46 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:46 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:46 no-preload-255023 kubelet[7987]: E1216 04:19:46.231559    7987 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:19:46 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:19:46 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:19:46 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 16 04:19:46 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:46 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:46 no-preload-255023 kubelet[8014]: E1216 04:19:46.985727    8014 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:19:46 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:19:46 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:19:47 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 16 04:19:47 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:47 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:19:47 no-preload-255023 kubelet[8116]: E1216 04:19:47.754523    8116 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:19:47 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:19:47 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 2 (344.452134ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.79s)
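
The kubelet section above shows the root cause of this failure: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and systemd restarted it more than 1200 times without the API server ever coming up. A quick sketch for checking which cgroup hierarchy a host uses (plain shell, nothing minikube-specific assumed):

  stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1

On a cgroup v1 host such as this Ubuntu 20.04 runner, moving to the unified hierarchy typically means booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line.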

x
+
TestStartStop/group/newest-cni/serial/SecondStart (374.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m9.903117232s)

-- stdout --
	* [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1216 04:11:49.443609 2088124 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:11:49.443766 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.443791 2088124 out.go:374] Setting ErrFile to fd 2...
	I1216 04:11:49.443797 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.444086 2088124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:11:49.444552 2088124 out.go:368] Setting JSON to false
	I1216 04:11:49.445491 2088124 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35654,"bootTime":1765822656,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:11:49.445560 2088124 start.go:143] virtualization:  
	I1216 04:11:49.450767 2088124 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:11:49.453684 2088124 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:11:49.453830 2088124 notify.go:221] Checking for updates...
	I1216 04:11:49.459490 2088124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:11:49.462425 2088124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:49.465199 2088124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:11:49.468049 2088124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:11:49.470926 2088124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:11:49.474323 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:49.474898 2088124 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:11:49.507547 2088124 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:11:49.507675 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.559588 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.550344871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.559694 2088124 docker.go:319] overlay module found
	I1216 04:11:49.564661 2088124 out.go:179] * Using the docker driver based on existing profile
	I1216 04:11:49.567577 2088124 start.go:309] selected driver: docker
	I1216 04:11:49.567592 2088124 start.go:927] validating driver "docker" against &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.567688 2088124 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:11:49.568412 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.630893 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.62154899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.631269 2088124 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:11:49.631299 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:49.631354 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:11:49.631398 2088124 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.634471 2088124 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:11:49.637273 2088124 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:11:49.640282 2088124 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:11:49.643072 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:49.643109 2088124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:11:49.643124 2088124 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:11:49.643134 2088124 cache.go:65] Caching tarball of preloaded images
	I1216 04:11:49.643213 2088124 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:11:49.643223 2088124 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:11:49.643349 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:49.663232 2088124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:11:49.663256 2088124 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:11:49.663277 2088124 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:11:49.663307 2088124 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:11:49.663368 2088124 start.go:364] duration metric: took 37.825µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:11:49.663390 2088124 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:11:49.663398 2088124 fix.go:54] fixHost starting: 
	I1216 04:11:49.663657 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.680807 2088124 fix.go:112] recreateIfNeeded on newest-cni-450938: state=Stopped err=<nil>
	W1216 04:11:49.680842 2088124 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:11:49.684150 2088124 out.go:252] * Restarting existing docker container for "newest-cni-450938" ...
	I1216 04:11:49.684240 2088124 cli_runner.go:164] Run: docker start newest-cni-450938
	I1216 04:11:49.955342 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.981840 2088124 kic.go:430] container "newest-cni-450938" state is running.
	I1216 04:11:49.982211 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:50.021278 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:50.021527 2088124 machine.go:94] provisionDockerMachine start ...
	I1216 04:11:50.021596 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:50.049595 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:50.050060 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:50.050075 2088124 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:11:50.050748 2088124 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:11:53.188290 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.188358 2088124 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:11:53.188485 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.208640 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.208973 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.208992 2088124 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:11:53.354850 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.354932 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.373349 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.373653 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.373677 2088124 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:11:53.507317 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:11:53.507346 2088124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:11:53.507369 2088124 ubuntu.go:190] setting up certificates
	I1216 04:11:53.507379 2088124 provision.go:84] configureAuth start
	I1216 04:11:53.507463 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:53.525162 2088124 provision.go:143] copyHostCerts
	I1216 04:11:53.525241 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:11:53.525251 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:11:53.525327 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:11:53.525423 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:11:53.525428 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:11:53.525453 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:11:53.525509 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:11:53.525514 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:11:53.525536 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:11:53.525580 2088124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
	I1216 04:11:54.045695 2088124 provision.go:177] copyRemoteCerts
	I1216 04:11:54.045768 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:11:54.045810 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.066867 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.167270 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:11:54.185959 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:11:54.204990 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:11:54.223347 2088124 provision.go:87] duration metric: took 715.940901ms to configureAuth
	I1216 04:11:54.223373 2088124 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:11:54.223571 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:54.223579 2088124 machine.go:97] duration metric: took 4.202043696s to provisionDockerMachine
	I1216 04:11:54.223586 2088124 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:11:54.223597 2088124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:11:54.223657 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:11:54.223694 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.241386 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.339071 2088124 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:11:54.342372 2088124 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:11:54.342404 2088124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:11:54.342417 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:11:54.342476 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:11:54.342569 2088124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:11:54.342679 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:11:54.350184 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:54.367994 2088124 start.go:296] duration metric: took 144.392831ms for postStartSetup
	I1216 04:11:54.368092 2088124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:11:54.368136 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.385560 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.484799 2088124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:11:54.491513 2088124 fix.go:56] duration metric: took 4.828106411s for fixHost
	I1216 04:11:54.491541 2088124 start.go:83] releasing machines lock for "newest-cni-450938", held for 4.82816163s
	I1216 04:11:54.491612 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:54.509094 2088124 ssh_runner.go:195] Run: cat /version.json
	I1216 04:11:54.509138 2088124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:11:54.509150 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.509206 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.527383 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.529259 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.622646 2088124 ssh_runner.go:195] Run: systemctl --version
	I1216 04:11:54.714029 2088124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:11:54.718486 2088124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:11:54.718568 2088124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:11:54.726541 2088124 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:11:54.726568 2088124 start.go:496] detecting cgroup driver to use...
	I1216 04:11:54.726632 2088124 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:11:54.726714 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:11:54.745031 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:11:54.758297 2088124 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:11:54.758370 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:11:54.774348 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:11:54.787565 2088124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:11:54.906330 2088124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:11:55.031458 2088124 docker.go:234] disabling docker service ...
	I1216 04:11:55.031602 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:11:55.047495 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:11:55.061071 2088124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:11:55.176474 2088124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:11:55.308037 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:11:55.321108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:11:55.335545 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:11:55.344904 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:11:55.354341 2088124 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:11:55.354432 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:11:55.364241 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.373363 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:11:55.382311 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.391427 2088124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:11:55.399573 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:11:55.408617 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:11:55.417842 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:11:55.427155 2088124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:11:55.435028 2088124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:11:55.442465 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:55.555794 2088124 ssh_runner.go:195] Run: sudo systemctl restart containerd
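	
	The sed pipeline above rewrites a handful of fields in /etc/containerd/config.toml in place. After the edits, the relevant fragment should look roughly like this (a sketch only; the exact section nesting depends on the config shipped in the base image):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    restrict_oom_score_adj = false
	    sandbox_image = "registry.k8s.io/pause:3.10.1"
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false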
	I1216 04:11:55.675355 2088124 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:11:55.675506 2088124 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:11:55.679491 2088124 start.go:564] Will wait 60s for crictl version
	I1216 04:11:55.679606 2088124 ssh_runner.go:195] Run: which crictl
	I1216 04:11:55.683263 2088124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:11:55.706762 2088124 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:11:55.706911 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.726295 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.754045 2088124 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:11:55.757209 2088124 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:55.773141 2088124 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:11:55.777028 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
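Both host entries in this run (host.minikube.internal here, control-plane.minikube.internal later) use the same filter-and-append rewrite of /etc/hosts shown in the bash one-liner above. Roughly the same logic in Go (a sketch; the helper name and the direct WriteFile are assumptions, and the real flow writes a temp file and copies it back with sudo, as logged):

package hosts

import (
	"os"
	"strings"
)

// ensureHostEntry removes any line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline above.
func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}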
	I1216 04:11:55.790127 2088124 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:11:55.792976 2088124 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:11:55.793134 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:55.793224 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.820865 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.820893 2088124 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:11:55.820953 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.848708 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.848733 2088124 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:11:55.848741 2088124 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:11:55.848865 2088124 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:11:55.848944 2088124 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:11:55.877782 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:55.877809 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:11:55.877833 2088124 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:11:55.877856 2088124 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:11:55.877980 2088124 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
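The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---), with deliberately CI-friendly kubelet settings: evictionHard thresholds at 0% and imageGCHighThresholdPercent at 100 disable disk-pressure eviction, matching the "disable disk resource management by default" comment. A small Go sketch that splits such a stream and reports each document's kind (the helper name and the naive --- split are assumptions for illustration):

package kubeadmcfg

import (
	"strings"

	"gopkg.in/yaml.v3"
)

// documentKinds splits a multi-document YAML stream on "---" separator lines
// and returns the kind declared by each document.
func documentKinds(stream string) ([]string, error) {
	var kinds []string
	for _, doc := range strings.Split(stream, "\n---\n") {
		var meta struct {
			Kind string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			return nil, err
		}
		kinds = append(kinds, meta.Kind)
	}
	return kinds, nil
}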
	I1216 04:11:55.878053 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:11:55.886063 2088124 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:11:55.886135 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:11:55.893994 2088124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:11:55.906976 2088124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:11:55.921636 2088124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
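Each "scp memory --> ..." line above copies content generated in-process (the kubelet unit, the 10-kubeadm.conf drop-in, kubeadm.yaml) straight to the node without a local file. A rough equivalent using golang.org/x/crypto/ssh (the tee-over-stdin transport and the helper name are assumptions; minikube's ssh_runner has its own transfer path):

package remote

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams an in-memory payload to a remote path over an
// existing SSH connection by piping it into `sudo tee`.
func writeRemoteFile(client *ssh.Client, path string, payload []byte) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(payload)
	return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", path))
}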
	I1216 04:11:55.935475 2088124 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:11:55.940181 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:55.958241 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.086097 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:56.102803 2088124 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:11:56.102828 2088124 certs.go:195] generating shared ca certs ...
	I1216 04:11:56.102856 2088124 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.103007 2088124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:11:56.103163 2088124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:11:56.103175 2088124 certs.go:257] generating profile certs ...
	I1216 04:11:56.103292 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:11:56.103376 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:11:56.103427 2088124 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:11:56.103545 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:11:56.103587 2088124 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:11:56.103600 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:11:56.103627 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:11:56.103658 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:11:56.103686 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:11:56.103735 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:56.104338 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:11:56.126254 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:11:56.147493 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:11:56.167667 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:11:56.186450 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:11:56.204453 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:11:56.222875 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:11:56.240385 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:11:56.257955 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:11:56.276171 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:11:56.293848 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:11:56.311719 2088124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:11:56.324807 2088124 ssh_runner.go:195] Run: openssl version
	I1216 04:11:56.331262 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.338764 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:11:56.346054 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.349987 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.350052 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.391179 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:11:56.398825 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.406218 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:11:56.413696 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417638 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417705 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.459490 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:11:56.466920 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.474252 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:11:56.481440 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485119 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485259 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.526344 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:11:56.533907 2088124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:11:56.537774 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:11:56.578487 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:11:56.619729 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:11:56.660999 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:11:56.702232 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:11:56.744306 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
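The six openssl runs above each check one control-plane certificate with -checkend 86400, i.e. "does this cert expire within the next 24 hours?". The same check expressed with Go's crypto/x509 (a sketch; the function name is an assumption, and as the log shows minikube shells out to openssl on the node instead):

package certs

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}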
	I1216 04:11:56.785680 2088124 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:56.785803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:11:56.785870 2088124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:11:56.816785 2088124 cri.go:89] found id: ""
	I1216 04:11:56.816890 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:11:56.824683 2088124 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:11:56.824744 2088124 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:11:56.824813 2088124 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:11:56.832253 2088124 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:11:56.832838 2088124 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.833086 2088124 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-450938" cluster setting kubeconfig missing "newest-cni-450938" context setting]
	I1216 04:11:56.833830 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.835841 2088124 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:11:56.846568 2088124 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 04:11:56.846607 2088124 kubeadm.go:602] duration metric: took 21.839206ms to restartPrimaryControlPlane
	I1216 04:11:56.846659 2088124 kubeadm.go:403] duration metric: took 60.947212ms to StartCluster
	I1216 04:11:56.846683 2088124 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.846774 2088124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.847954 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.848288 2088124 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:11:56.848543 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:56.848590 2088124 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:11:56.848653 2088124 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-450938"
	I1216 04:11:56.848667 2088124 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-450938"
	I1216 04:11:56.848690 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.849140 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.849662 2088124 addons.go:70] Setting dashboard=true in profile "newest-cni-450938"
	I1216 04:11:56.849685 2088124 addons.go:239] Setting addon dashboard=true in "newest-cni-450938"
	W1216 04:11:56.849692 2088124 addons.go:248] addon dashboard should already be in state true
	I1216 04:11:56.849725 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.850155 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.851797 2088124 addons.go:70] Setting default-storageclass=true in profile "newest-cni-450938"
	I1216 04:11:56.851835 2088124 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450938"
	I1216 04:11:56.852230 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.854311 2088124 out.go:179] * Verifying Kubernetes components...
	I1216 04:11:56.857550 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.877736 2088124 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:11:56.883198 2088124 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:11:56.888994 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:11:56.889023 2088124 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:11:56.889099 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.905463 2088124 addons.go:239] Setting addon default-storageclass=true in "newest-cni-450938"
	I1216 04:11:56.905510 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.905917 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.906132 2088124 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:11:56.909026 2088124 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:56.909049 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:11:56.909124 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.939233 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.960779 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.969260 2088124 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:56.969285 2088124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:11:56.969344 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.994990 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:57.096083 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:57.153660 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:57.154691 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:11:57.154741 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:11:57.179948 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:57.181646 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:11:57.181698 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:11:57.220157 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:11:57.220192 2088124 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:11:57.270420 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:11:57.270450 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:11:57.289844 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:11:57.289925 2088124 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:11:57.304564 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:11:57.304589 2088124 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:11:57.318199 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:11:57.318268 2088124 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:11:57.331721 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:11:57.331747 2088124 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:11:57.344689 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:11:57.344766 2088124 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:11:57.358118 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:57.937381 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937417 2088124 retry.go:31] will retry after 269.480362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
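Every apply in this window fails the same way (kubectl cannot reach the apiserver on localhost:8443 yet), so retry.go reschedules each command with a small randomized delay, as in the "will retry after 269.480362ms" line above. A Go sketch of that pattern; the exact backoff shape here (jittered, doubling) is an assumption for illustration, not minikube's actual retry policy:

package retry

import (
	"math/rand"
	"time"
)

// withBackoff retries fn with a jittered, doubling delay between attempts,
// returning the last error if every attempt fails.
func withBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Sleep for a random duration in [base, 2*base).
		time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
		base *= 2
	}
	return err
}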
	W1216 04:11:57.937480 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937486 2088124 retry.go:31] will retry after 229.28952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:57.937664 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937674 2088124 retry.go:31] will retry after 277.329171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937800 2088124 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:11:57.937903 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:58.167607 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.207320 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:58.215928 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.286306 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.286392 2088124 retry.go:31] will retry after 251.551644ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.336689 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.336775 2088124 retry.go:31] will retry after 297.618581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.344615 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.344703 2088124 retry.go:31] will retry after 371.748045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.438848 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:58.538550 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:58.607193 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.607227 2088124 retry.go:31] will retry after 295.364456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.635597 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:58.705620 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.705655 2088124 retry.go:31] will retry after 548.313742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.716963 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.791977 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.792012 2088124 retry.go:31] will retry after 352.878163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.903095 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.938720 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:11:58.980189 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.980231 2088124 retry.go:31] will retry after 538.903986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.145753 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:59.214092 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.214141 2088124 retry.go:31] will retry after 822.609154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.254394 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:59.315668 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.315705 2088124 retry.go:31] will retry after 808.232785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.439021 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:59.520292 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:59.580253 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.580290 2088124 retry.go:31] will retry after 1.339162464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.938854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.037859 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:12:00.126588 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:00.271287 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271330 2088124 retry.go:31] will retry after 1.560463337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:12:00.271395 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271405 2088124 retry.go:31] will retry after 965.630874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.439512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.919713 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:00.938198 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:01.016821 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.016853 2088124 retry.go:31] will retry after 2.723457612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.238128 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:01.299810 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.299846 2088124 retry.go:31] will retry after 1.407497229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.438022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:01.832831 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:01.895982 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.896019 2088124 retry.go:31] will retry after 1.861173275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.938295 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.438804 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.708270 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:02.778471 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:02.778510 2088124 retry.go:31] will retry after 3.48676176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:02.938901 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.740586 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:03.758141 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:03.823512 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.823549 2088124 retry.go:31] will retry after 3.513983603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:12:03.840241 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.840277 2088124 retry.go:31] will retry after 3.549700703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.938636 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.438975 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.438813 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:06.265883 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:06.326297 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.326330 2088124 retry.go:31] will retry after 5.907729831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.438566 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:06.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:07.337994 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:07.390520 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:07.400091 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.400119 2088124 retry.go:31] will retry after 4.07949146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.438412 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:07.458870 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.458913 2088124 retry.go:31] will retry after 5.738742007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.938058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.438048 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.938086 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.438088 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.938071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.438982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.938817 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:11.438560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
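
Interleaved with the apply retries, the log polls for a running apiserver every 500ms with sudo pgrep -xnf kube-apiserver.*minikube.*. A minimal sketch of that kind of poll loop in Go, assuming the same pgrep invocation and a hypothetical overall timeout (illustrative, not minikube's actual wait code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process for
    // this minikube profile appears, or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// -x: exact match, -n: newest match, -f: match the full command line.
    		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
    		if err := cmd.Run(); err == nil {
    			return nil // pgrep exits 0 when at least one process matched
    		}
    		time.Sleep(500 * time.Millisecond) // same cadence as the log above
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
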
	I1216 04:12:11.480608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:11.544274 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.544311 2088124 retry.go:31] will retry after 7.489839912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.938962 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.234793 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:12.294760 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.294795 2088124 retry.go:31] will retry after 8.284230916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.438042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.938369 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.198743 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:13.273972 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.274008 2088124 retry.go:31] will retry after 8.727161897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.438137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.938122 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.938105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.438117 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.938675 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.438275 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.438977 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.938090 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.438139 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.938875 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.034608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:19.095129 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.095161 2088124 retry.go:31] will retry after 13.285449955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.438765 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.938027 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.438947 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.579839 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:20.651187 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.651287 2088124 retry.go:31] will retry after 8.595963064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.438919 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.938886 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.001902 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:22.069854 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.069889 2088124 retry.go:31] will retry after 9.875475964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.438071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.938057 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.438759 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.938093 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.438012 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.438056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.938060 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.438683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.938545 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.438839 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.938076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.438528 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.938942 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.247522 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:29.319498 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.319530 2088124 retry.go:31] will retry after 11.610992075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.438808 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.938634 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.438853 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.938004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.438055 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.939022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.945765 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:32.028672 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.028710 2088124 retry.go:31] will retry after 8.660108846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.380884 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:32.438451 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:32.451845 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.451878 2088124 retry.go:31] will retry after 20.587741489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.939020 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.438637 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.939026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.438183 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.938889 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.438058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.438040 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.938449 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.438932 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.938711 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.438609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.938102 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.438039 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.938131 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.689879 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:40.758598 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.758633 2088124 retry.go:31] will retry after 22.619838961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.931114 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:12:40.938807 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:41.022703 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:41.022737 2088124 retry.go:31] will retry after 26.329717671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:41.438070 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:41.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.438073 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.938708 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.438842 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.938877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.438603 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.938026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.438387 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.938042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.438913 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.938061 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.438105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.938608 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.438052 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.938137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.438126 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.938158 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.438047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.437993 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.938585 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.040311 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:53.100279 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:53.100315 2088124 retry.go:31] will retry after 25.050501438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:53.438735 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.938047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.438981 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.938826 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.438076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.938982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:56.438082 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:56.938775 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:56.938878 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:56.965240 2088124 cri.go:89] found id: ""
	I1216 04:12:56.965267 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.965275 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:56.965282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:56.965342 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:56.994326 2088124 cri.go:89] found id: ""
	I1216 04:12:56.994352 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.994361 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:56.994368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:56.994428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:57.023992 2088124 cri.go:89] found id: ""
	I1216 04:12:57.024019 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.024028 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:57.024034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:57.024096 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:57.048533 2088124 cri.go:89] found id: ""
	I1216 04:12:57.048557 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.048564 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:57.048571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:57.048633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:57.073452 2088124 cri.go:89] found id: ""
	I1216 04:12:57.073477 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.073489 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:57.073495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:57.073556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:12:57.098320 2088124 cri.go:89] found id: ""
	I1216 04:12:57.098343 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.098351 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:12:57.098358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:12:57.098422 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:12:57.122156 2088124 cri.go:89] found id: ""
	I1216 04:12:57.122178 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.122186 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:12:57.122192 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:12:57.122253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:12:57.146348 2088124 cri.go:89] found id: ""
	I1216 04:12:57.146371 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.146379 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:12:57.146389 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:12:57.146400 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:12:57.204504 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:12:57.204554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:12:57.222444 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:12:57.222477 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:12:57.295723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:12:57.295745 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:12:57.295758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:12:57.320926 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:12:57.320959 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:12:59.851668 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:59.862238 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:59.862307 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:59.886311 2088124 cri.go:89] found id: ""
	I1216 04:12:59.886338 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.886346 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:59.886353 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:59.886412 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:59.910403 2088124 cri.go:89] found id: ""
	I1216 04:12:59.910426 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.910434 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:59.910440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:59.910498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:59.935230 2088124 cri.go:89] found id: ""
	I1216 04:12:59.935253 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.935262 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:59.935268 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:59.935329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:59.958999 2088124 cri.go:89] found id: ""
	I1216 04:12:59.959022 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.959030 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:59.959037 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:59.959113 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:59.984633 2088124 cri.go:89] found id: ""
	I1216 04:12:59.984655 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.984663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:59.984670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:59.984729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:00.052821 2088124 cri.go:89] found id: ""
	I1216 04:13:00.052848 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.052857 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:00.052865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:00.052942 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:00.179259 2088124 cri.go:89] found id: ""
	I1216 04:13:00.179286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.179295 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:00.179301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:00.179374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:00.301818 2088124 cri.go:89] found id: ""
	I1216 04:13:00.301845 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.301854 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:00.301865 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:00.301877 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:00.370430 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:00.370474 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:00.387961 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:00.387994 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:00.469934 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:00.470008 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:00.470035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:00.497033 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:00.497108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.031116 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:03.042155 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:03.042231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:03.067263 2088124 cri.go:89] found id: ""
	I1216 04:13:03.067286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.067294 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:03.067300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:03.067359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:03.092385 2088124 cri.go:89] found id: ""
	I1216 04:13:03.092411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.092421 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:03.092434 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:03.092500 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:03.121839 2088124 cri.go:89] found id: ""
	I1216 04:13:03.121866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.121874 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:03.121881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:03.121939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:03.145563 2088124 cri.go:89] found id: ""
	I1216 04:13:03.145591 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.145600 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:03.145606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:03.145674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:03.173280 2088124 cri.go:89] found id: ""
	I1216 04:13:03.173308 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.173317 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:03.173324 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:03.173387 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:03.198437 2088124 cri.go:89] found id: ""
	I1216 04:13:03.198464 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.198472 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:03.198479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:03.198539 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:03.223390 2088124 cri.go:89] found id: ""
	I1216 04:13:03.223417 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.223426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:03.223433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:03.223492 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:03.247999 2088124 cri.go:89] found id: ""
	I1216 04:13:03.248027 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.248037 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:03.248046 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:03.248058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:03.273012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:03.273045 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.309023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:03.309054 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:03.365917 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:03.365958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:03.379538 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:13:03.383127 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:03.383196 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:03.513399 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:03.513433 2088124 retry.go:31] will retry after 36.39416212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:03.513601 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.013933 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:06.025509 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:06.025592 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:06.052215 2088124 cri.go:89] found id: ""
	I1216 04:13:06.052240 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.052251 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:06.052258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:06.052322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:06.079261 2088124 cri.go:89] found id: ""
	I1216 04:13:06.079294 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.079303 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:06.079309 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:06.079373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:06.105297 2088124 cri.go:89] found id: ""
	I1216 04:13:06.105320 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.105329 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:06.105335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:06.105394 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:06.134648 2088124 cri.go:89] found id: ""
	I1216 04:13:06.134671 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.134679 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:06.134685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:06.134753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:06.159604 2088124 cri.go:89] found id: ""
	I1216 04:13:06.159627 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.159635 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:06.159641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:06.159705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:06.189283 2088124 cri.go:89] found id: ""
	I1216 04:13:06.189307 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.189315 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:06.189322 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:06.189431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:06.214435 2088124 cri.go:89] found id: ""
	I1216 04:13:06.214469 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.214479 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:06.214486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:06.214553 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:06.240374 2088124 cri.go:89] found id: ""
	I1216 04:13:06.240399 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.240407 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:06.240417 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:06.240465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:06.297779 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:06.297828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:06.314788 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:06.314817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:06.383844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.383863 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:06.383876 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:06.409175 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:06.409211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:07.353255 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:07.417109 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:07.417143 2088124 retry.go:31] will retry after 43.71748827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:08.979175 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:08.990018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:08.990104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:09.017028 2088124 cri.go:89] found id: ""
	I1216 04:13:09.017051 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.017060 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:09.017066 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:09.017126 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:09.042381 2088124 cri.go:89] found id: ""
	I1216 04:13:09.042404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.042413 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:09.042419 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:09.042477 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:09.071646 2088124 cri.go:89] found id: ""
	I1216 04:13:09.071670 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.071679 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:09.071685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:09.071744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:09.100697 2088124 cri.go:89] found id: ""
	I1216 04:13:09.100722 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.100730 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:09.100737 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:09.100797 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:09.129662 2088124 cri.go:89] found id: ""
	I1216 04:13:09.129695 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.129704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:09.129710 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:09.129780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:09.156770 2088124 cri.go:89] found id: ""
	I1216 04:13:09.156794 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.156802 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:09.156809 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:09.156869 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:09.182436 2088124 cri.go:89] found id: ""
	I1216 04:13:09.182458 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.182466 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:09.182472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:09.182531 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:09.206146 2088124 cri.go:89] found id: ""
	I1216 04:13:09.206170 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.206177 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:09.206186 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:09.206198 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:09.231510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:09.231544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:09.260226 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:09.260256 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:09.316036 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:09.316074 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:09.332123 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:09.332153 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:09.399253 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:09.390164    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.390761    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392299    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392750    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.394603    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:09.390164    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.390761    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392299    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392750    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.394603    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:11.899540 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:11.910018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:11.910090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:11.938505 2088124 cri.go:89] found id: ""
	I1216 04:13:11.938532 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.938541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:11.938549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:11.938611 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:11.962625 2088124 cri.go:89] found id: ""
	I1216 04:13:11.962654 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.962663 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:11.962681 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:11.962753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:11.987471 2088124 cri.go:89] found id: ""
	I1216 04:13:11.987497 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.987506 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:11.987512 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:11.987578 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:12.016864 2088124 cri.go:89] found id: ""
	I1216 04:13:12.016892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.016900 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:12.016907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:12.016971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:12.042061 2088124 cri.go:89] found id: ""
	I1216 04:13:12.042088 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.042096 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:12.042102 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:12.042163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:12.071427 2088124 cri.go:89] found id: ""
	I1216 04:13:12.071455 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.071464 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:12.071471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:12.071533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:12.096407 2088124 cri.go:89] found id: ""
	I1216 04:13:12.096454 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.096463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:12.096470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:12.096529 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:12.120925 2088124 cri.go:89] found id: ""
	I1216 04:13:12.120952 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.120961 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:12.120970 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:12.120981 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:12.187317 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:12.178799    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.179379    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181098    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181645    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.183376    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:12.178799    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.179379    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181098    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181645    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.183376    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:12.187390 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:12.187411 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:12.212126 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:12.212162 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:12.243105 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:12.243134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:12.300571 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:12.300619 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:14.817445 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:14.827746 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:14.827821 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:14.857336 2088124 cri.go:89] found id: ""
	I1216 04:13:14.857363 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.857372 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:14.857379 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:14.857446 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:14.882109 2088124 cri.go:89] found id: ""
	I1216 04:13:14.882137 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.882146 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:14.882152 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:14.882211 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:14.914132 2088124 cri.go:89] found id: ""
	I1216 04:13:14.914161 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.914171 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:14.914178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:14.914239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:14.939185 2088124 cri.go:89] found id: ""
	I1216 04:13:14.939214 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.939223 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:14.939230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:14.939297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:14.963568 2088124 cri.go:89] found id: ""
	I1216 04:13:14.963595 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.963604 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:14.963630 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:14.963702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:14.988853 2088124 cri.go:89] found id: ""
	I1216 04:13:14.988880 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.988889 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:14.988895 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:14.988957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:15.018658 2088124 cri.go:89] found id: ""
	I1216 04:13:15.018685 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.018694 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:15.018701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:15.018780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:15.052902 2088124 cri.go:89] found id: ""
	I1216 04:13:15.052926 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.052935 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
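All eight crictl queries above return empty IDs: not one control-plane or addon container was ever created on this node. The probe sequence condenses to a loop like this (a sketch over the same component names the log checks):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # --quiet prints container IDs only; empty output means the component never ran
      sudo crictl ps -a --quiet --name="$c"
    done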
	I1216 04:13:15.052945 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:15.052956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:15.110239 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:15.110275 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:15.126429 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:15.126498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:15.193844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:15.184934    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.185663    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.187427    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.188101    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.189782    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:15.184934    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.185663    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.187427    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.188101    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.189782    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:15.193874 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:15.193889 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:15.219891 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:15.219925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:17.752258 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:17.763106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:17.763180 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:17.789059 2088124 cri.go:89] found id: ""
	I1216 04:13:17.789084 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.789093 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:17.789099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:17.789158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:17.817534 2088124 cri.go:89] found id: ""
	I1216 04:13:17.817560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.817569 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:17.817576 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:17.817637 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:17.843134 2088124 cri.go:89] found id: ""
	I1216 04:13:17.843160 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.843169 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:17.843175 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:17.843240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:17.868379 2088124 cri.go:89] found id: ""
	I1216 04:13:17.868404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.868414 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:17.868421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:17.868490 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:17.893356 2088124 cri.go:89] found id: ""
	I1216 04:13:17.893384 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.893393 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:17.893400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:17.893463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:17.921808 2088124 cri.go:89] found id: ""
	I1216 04:13:17.921851 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.921860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:17.921867 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:17.921928 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:17.947257 2088124 cri.go:89] found id: ""
	I1216 04:13:17.947284 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.947293 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:17.947300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:17.947367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:17.975318 2088124 cri.go:89] found id: ""
	I1216 04:13:17.975345 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.975354 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:17.975364 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:17.975375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:18.051655 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:18.042648    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.043445    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045243    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045767    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.047547    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:18.042648    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.043445    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045243    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045767    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.047547    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:18.051680 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:18.051693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:18.078685 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:18.078723 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:18.107761 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:18.107792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:18.151402 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:13:18.166502 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:18.166585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 04:13:18.219917 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:18.220071 2088124 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
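The storage-provisioner apply fails while kubectl downloads the OpenAPI schema for client-side validation, but the suggested --validate=false would not rescue it here: the apply itself still needs a server round-trip, which hits the same refused connection. For reference, the retried command with validation disabled would look like this (useful only once the apiserver actually answers):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml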
	I1216 04:13:20.720560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:20.734518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:20.734605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:20.771344 2088124 cri.go:89] found id: ""
	I1216 04:13:20.771418 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.771435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:20.771442 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:20.771517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:20.801470 2088124 cri.go:89] found id: ""
	I1216 04:13:20.801496 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.801505 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:20.801511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:20.801591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:20.826547 2088124 cri.go:89] found id: ""
	I1216 04:13:20.826620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.826644 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:20.826663 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:20.826747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:20.852855 2088124 cri.go:89] found id: ""
	I1216 04:13:20.852881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.852891 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:20.852898 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:20.852986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:20.878623 2088124 cri.go:89] found id: ""
	I1216 04:13:20.878659 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.878668 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:20.878692 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:20.878808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:20.902864 2088124 cri.go:89] found id: ""
	I1216 04:13:20.902938 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.902964 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:20.902984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:20.903181 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:20.932453 2088124 cri.go:89] found id: ""
	I1216 04:13:20.932480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.932488 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:20.932495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:20.932552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:20.961972 2088124 cri.go:89] found id: ""
	I1216 04:13:20.962003 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.962012 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:20.962021 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:20.962046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:21.031620 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:21.021920    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.023404    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.024377    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.025233    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.026911    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:21.021920    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.023404    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.024377    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.025233    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.026911    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:21.031656 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:21.031669 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:21.057107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:21.057141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:21.084165 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:21.084195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:21.144652 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:21.144688 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
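From 04:13:12 onward the same probe-and-gather cycle repeats on a roughly three-second cadence, gated on the pgrep check for a live kube-apiserver process. A stripped-down sketch of an equivalent wait loop (hypothetical, not minikube's actual code path):

    # poll until an apiserver process for this profile appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      sleep 3
    done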
	I1216 04:13:23.662474 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:23.672891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:23.672972 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:23.728294 2088124 cri.go:89] found id: ""
	I1216 04:13:23.728317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.728325 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:23.728332 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:23.728390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:23.774385 2088124 cri.go:89] found id: ""
	I1216 04:13:23.774414 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.774423 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:23.774429 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:23.774496 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:23.804506 2088124 cri.go:89] found id: ""
	I1216 04:13:23.804531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.804553 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:23.804560 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:23.804618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:23.831638 2088124 cri.go:89] found id: ""
	I1216 04:13:23.831674 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.831683 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:23.831689 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:23.831766 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:23.856129 2088124 cri.go:89] found id: ""
	I1216 04:13:23.856155 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.856164 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:23.856172 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:23.856251 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:23.884761 2088124 cri.go:89] found id: ""
	I1216 04:13:23.884787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.884796 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:23.884803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:23.884905 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:23.913711 2088124 cri.go:89] found id: ""
	I1216 04:13:23.913736 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.913745 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:23.913752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:23.913810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:23.938590 2088124 cri.go:89] found id: ""
	I1216 04:13:23.938616 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.938625 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:23.938635 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:23.938646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:23.993972 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:23.994007 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:24.012474 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:24.012506 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:24.080748 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:24.071242    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.071643    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.073210    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.074640    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.075561    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:24.071242    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.071643    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.073210    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.074640    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.075561    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:24.080778 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:24.080791 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:24.110317 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:24.110357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:26.644643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:26.655360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:26.655430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:26.679082 2088124 cri.go:89] found id: ""
	I1216 04:13:26.679108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.679117 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:26.679124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:26.679184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:26.727361 2088124 cri.go:89] found id: ""
	I1216 04:13:26.727389 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.727399 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:26.727405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:26.727466 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:26.784659 2088124 cri.go:89] found id: ""
	I1216 04:13:26.784688 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.784697 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:26.784703 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:26.784765 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:26.813210 2088124 cri.go:89] found id: ""
	I1216 04:13:26.813237 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.813246 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:26.813253 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:26.813336 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:26.837930 2088124 cri.go:89] found id: ""
	I1216 04:13:26.837955 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.837963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:26.837970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:26.838031 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:26.864344 2088124 cri.go:89] found id: ""
	I1216 04:13:26.864369 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.864378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:26.864385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:26.864461 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:26.889169 2088124 cri.go:89] found id: ""
	I1216 04:13:26.889195 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.889207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:26.889214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:26.889298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:26.913569 2088124 cri.go:89] found id: ""
	I1216 04:13:26.913596 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.913604 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:26.913614 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:26.913644 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:26.929642 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:26.929671 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:26.992130 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:26.983971    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.984717    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986365    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986828    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.988289    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:26.983971    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.984717    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986365    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986828    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.988289    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:26.992154 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:26.992166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:27.018253 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:27.018291 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:27.047464 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:27.047492 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.603162 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:29.613926 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:29.614005 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:29.639664 2088124 cri.go:89] found id: ""
	I1216 04:13:29.639690 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.639700 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:29.639706 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:29.639773 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:29.664287 2088124 cri.go:89] found id: ""
	I1216 04:13:29.664313 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.664322 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:29.664328 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:29.664391 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:29.715854 2088124 cri.go:89] found id: ""
	I1216 04:13:29.715881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.715890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:29.715896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:29.715957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:29.775256 2088124 cri.go:89] found id: ""
	I1216 04:13:29.775283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.775291 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:29.775298 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:29.775359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:29.800860 2088124 cri.go:89] found id: ""
	I1216 04:13:29.800884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.800893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:29.800899 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:29.800966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:29.826179 2088124 cri.go:89] found id: ""
	I1216 04:13:29.826201 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.826209 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:29.826216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:29.826287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:29.851587 2088124 cri.go:89] found id: ""
	I1216 04:13:29.851657 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.851668 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:29.851675 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:29.851771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:29.876290 2088124 cri.go:89] found id: ""
	I1216 04:13:29.876317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.876327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:29.876336 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:29.876351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.934758 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:29.934795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:29.950904 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:29.950934 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:30.063379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:30.052801    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.053836    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.055700    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.056432    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.058410    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:30.052801    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.053836    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.055700    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.056432    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.058410    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:30.063402 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:30.063416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:30.093513 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:30.093550 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:32.623683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:32.634450 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:32.634522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:32.659386 2088124 cri.go:89] found id: ""
	I1216 04:13:32.659411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.659419 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:32.659426 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:32.659488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:32.700370 2088124 cri.go:89] found id: ""
	I1216 04:13:32.700397 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.700406 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:32.700413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:32.700483 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:32.757584 2088124 cri.go:89] found id: ""
	I1216 04:13:32.757606 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.757615 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:32.757621 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:32.757683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:32.803420 2088124 cri.go:89] found id: ""
	I1216 04:13:32.803445 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.803454 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:32.803460 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:32.803523 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:32.828842 2088124 cri.go:89] found id: ""
	I1216 04:13:32.828866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.828875 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:32.828881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:32.828949 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:32.853353 2088124 cri.go:89] found id: ""
	I1216 04:13:32.853380 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.853389 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:32.853398 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:32.853501 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:32.877408 2088124 cri.go:89] found id: ""
	I1216 04:13:32.877435 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.877444 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:32.877451 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:32.877510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:32.901743 2088124 cri.go:89] found id: ""
	I1216 04:13:32.901770 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.901780 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:32.901790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:32.901804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:32.967369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:32.958798    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.959484    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.960984    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.961467    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.962939    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:32.958798    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.959484    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.960984    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.961467    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.962939    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:32.967394 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:32.967408 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:32.992952 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:32.992987 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:33.022501 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:33.022532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:33.078417 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:33.078454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:35.594569 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:35.607352 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:35.607423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:35.637370 2088124 cri.go:89] found id: ""
	I1216 04:13:35.637394 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.637403 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:35.637409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:35.637468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:35.661404 2088124 cri.go:89] found id: ""
	I1216 04:13:35.661428 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.661437 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:35.661443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:35.661499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:35.700087 2088124 cri.go:89] found id: ""
	I1216 04:13:35.700110 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.700118 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:35.700124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:35.700185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:35.753090 2088124 cri.go:89] found id: ""
	I1216 04:13:35.753163 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.753187 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:35.753207 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:35.753322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:35.783667 2088124 cri.go:89] found id: ""
	I1216 04:13:35.783693 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.783701 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:35.783707 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:35.783783 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:35.808401 2088124 cri.go:89] found id: ""
	I1216 04:13:35.808426 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.808434 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:35.808457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:35.808518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:35.832934 2088124 cri.go:89] found id: ""
	I1216 04:13:35.833001 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.833014 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:35.833022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:35.833080 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:35.857857 2088124 cri.go:89] found id: ""
	I1216 04:13:35.857892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.857902 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
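The block above is minikube polling the CRI for each expected control-plane component by container name; every query returns an empty ID list, so none of the components ever started under containerd. The same check, as a sketch run inside the node:

	# An empty result for a name means that component never started.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    echo "$c: ${ids:-<none>}"
	done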
	I1216 04:13:35.857911 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:35.857928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:35.888212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:35.888240 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:35.944155 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:35.944191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:35.960968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:35.960997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:36.037726 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
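Every kubectl call here dies at connect time: with no kube-apiserver container (the crictl listings above are empty), nothing is listening on localhost:8443, so the client never gets past the TCP dial. A quick way to confirm from inside the node, sketched under the assumption of the standard paths seen in this log:

	# No process and a refused port both point at an apiserver that never started.
	sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"
	curl -sk --max-time 5 https://localhost:8443/healthz || echo "port 8443 refused"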
	I1216 04:13:36.037753 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:36.037768 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:38.565516 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:38.576078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:38.576153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:38.603519 2088124 cri.go:89] found id: ""
	I1216 04:13:38.603550 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.603564 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:38.603571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:38.603642 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:38.630185 2088124 cri.go:89] found id: ""
	I1216 04:13:38.630212 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.630222 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:38.630228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:38.630295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:38.656496 2088124 cri.go:89] found id: ""
	I1216 04:13:38.656518 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.656527 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:38.656532 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:38.656597 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:38.691354 2088124 cri.go:89] found id: ""
	I1216 04:13:38.691375 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.691384 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:38.691390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:38.691448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:38.727377 2088124 cri.go:89] found id: ""
	I1216 04:13:38.727451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.727476 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:38.727495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:38.727607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:38.792847 2088124 cri.go:89] found id: ""
	I1216 04:13:38.792924 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.792949 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:38.792969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:38.793082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:38.819253 2088124 cri.go:89] found id: ""
	I1216 04:13:38.819326 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.819351 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:38.819369 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:38.819479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:38.844536 2088124 cri.go:89] found id: ""
	I1216 04:13:38.844560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.844569 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:38.844578 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:38.844590 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:38.903226 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:38.903264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:38.919524 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:38.919556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:38.983586 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:38.983611 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:38.983625 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:39.009510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:39.009548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:39.908601 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:13:39.971867 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:39.972017 2088124 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
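The dashboard addon fails before anything reaches the cluster: kubectl's client-side validation needs the server's OpenAPI schema from /openapi/v2, and with the apiserver down that download is refused. The --validate=false escape hatch the error message suggests would only defer the failure, since the apply itself also needs the API. A sketch of a single manifest from the retried command, with validation disabled (paths as in the log):

	# Fails at validation today; with --validate=false it would fail at the
	# API request instead, because localhost:8443 is refusing connections.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml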
	I1216 04:13:41.538728 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:41.550610 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:41.550686 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:41.580358 2088124 cri.go:89] found id: ""
	I1216 04:13:41.580388 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.580398 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:41.580405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:41.580476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:41.609251 2088124 cri.go:89] found id: ""
	I1216 04:13:41.609323 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.609346 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:41.609360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:41.609437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:41.634677 2088124 cri.go:89] found id: ""
	I1216 04:13:41.634714 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.634724 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:41.634731 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:41.634811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:41.660492 2088124 cri.go:89] found id: ""
	I1216 04:13:41.660531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.660541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:41.660555 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:41.660624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:41.706922 2088124 cri.go:89] found id: ""
	I1216 04:13:41.706958 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.706967 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:41.706974 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:41.707062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:41.771121 2088124 cri.go:89] found id: ""
	I1216 04:13:41.771150 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.771160 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:41.771167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:41.771228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:41.798371 2088124 cri.go:89] found id: ""
	I1216 04:13:41.798409 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.798418 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:41.798424 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:41.798505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:41.825080 2088124 cri.go:89] found id: ""
	I1216 04:13:41.825108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.825118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:41.825128 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:41.825142 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:41.881228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:41.881264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:41.897224 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:41.897252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:41.962985 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:41.963011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:41.963024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:41.988969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:41.989006 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:44.532418 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:44.542803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:44.542915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:44.568416 2088124 cri.go:89] found id: ""
	I1216 04:13:44.568439 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.568457 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:44.568463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:44.568522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:44.594143 2088124 cri.go:89] found id: ""
	I1216 04:13:44.594169 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.594179 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:44.594186 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:44.594247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:44.618788 2088124 cri.go:89] found id: ""
	I1216 04:13:44.618819 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.618828 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:44.618835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:44.618895 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:44.644302 2088124 cri.go:89] found id: ""
	I1216 04:13:44.644325 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.644333 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:44.644340 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:44.644398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:44.669819 2088124 cri.go:89] found id: ""
	I1216 04:13:44.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.669849 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:44.669855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:44.669924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:44.725552 2088124 cri.go:89] found id: ""
	I1216 04:13:44.725575 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.725583 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:44.725589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:44.725650 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:44.765386 2088124 cri.go:89] found id: ""
	I1216 04:13:44.765408 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.765426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:44.765432 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:44.765491 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:44.793682 2088124 cri.go:89] found id: ""
	I1216 04:13:44.793763 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.793788 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:44.793827 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:44.793857 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:44.852432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:44.852473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:44.868492 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:44.868520 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:44.931865 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:44.931889 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:44.931903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:44.957522 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:44.957557 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.485499 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:47.496279 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:47.496356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:47.520654 2088124 cri.go:89] found id: ""
	I1216 04:13:47.520681 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.520690 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:47.520696 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:47.520761 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:47.551944 2088124 cri.go:89] found id: ""
	I1216 04:13:47.551978 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.551987 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:47.552001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:47.552065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:47.578411 2088124 cri.go:89] found id: ""
	I1216 04:13:47.578438 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.578450 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:47.578457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:47.578519 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:47.604018 2088124 cri.go:89] found id: ""
	I1216 04:13:47.604041 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.604049 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:47.604055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:47.604112 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:47.629467 2088124 cri.go:89] found id: ""
	I1216 04:13:47.629491 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.629499 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:47.629506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:47.629567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:47.658252 2088124 cri.go:89] found id: ""
	I1216 04:13:47.658280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.658289 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:47.658295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:47.658362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:47.683444 2088124 cri.go:89] found id: ""
	I1216 04:13:47.683472 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.683481 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:47.683487 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:47.683548 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:47.745597 2088124 cri.go:89] found id: ""
	I1216 04:13:47.745620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.745629 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:47.745638 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:47.745650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.788108 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:47.788134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:47.844259 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:47.844292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:47.860046 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:47.860078 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:47.931100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:47.931125 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:47.931139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.458157 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:50.468844 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:50.468915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:50.493698 2088124 cri.go:89] found id: ""
	I1216 04:13:50.493725 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.493735 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:50.493741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:50.493799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:50.518623 2088124 cri.go:89] found id: ""
	I1216 04:13:50.518652 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.518664 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:50.518671 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:50.518737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:50.543940 2088124 cri.go:89] found id: ""
	I1216 04:13:50.543969 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.543978 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:50.543984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:50.544043 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:50.570246 2088124 cri.go:89] found id: ""
	I1216 04:13:50.570283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.570292 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:50.570299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:50.570374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:50.596855 2088124 cri.go:89] found id: ""
	I1216 04:13:50.596884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.596893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:50.596900 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:50.596965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:50.622325 2088124 cri.go:89] found id: ""
	I1216 04:13:50.622352 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.622361 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:50.622368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:50.622428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:50.647658 2088124 cri.go:89] found id: ""
	I1216 04:13:50.647683 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.647691 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:50.647698 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:50.647760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:50.672119 2088124 cri.go:89] found id: ""
	I1216 04:13:50.672156 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.672166 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:50.672176 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:50.672187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:50.741830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:50.741871 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:50.758886 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:50.758917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:50.843759 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:50.834975    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.835406    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837202    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837673    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.839159    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:50.834975    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.835406    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837202    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837673    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.839159    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:50.843782 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:50.843795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.870242 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:50.870278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:51.134849 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:51.199925 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:51.200071 2088124 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:13:51.205122 2088124 out.go:179] * Enabled addons: 
	I1216 04:13:51.208001 2088124 addons.go:530] duration metric: took 1m54.35940748s for enable addons: enabled=[]
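After 1m54s of these retries the addon phase gives up with an empty enabled set; both dashboard and default-storageclass failed for the same reason, an unreachable apiserver. From the host, addon state for a profile can be inspected with the test binary itself (a sketch; PROFILE is a placeholder for the profile under test):

	# Lists each addon and whether it is enabled for the given profile.
	out/minikube-linux-arm64 addons list -p PROFILE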
	I1216 04:13:53.399835 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:53.410221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:53.410292 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:53.442995 2088124 cri.go:89] found id: ""
	I1216 04:13:53.443019 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.443028 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:53.443034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:53.443119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:53.469085 2088124 cri.go:89] found id: ""
	I1216 04:13:53.469108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.469116 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:53.469122 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:53.469185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:53.492673 2088124 cri.go:89] found id: ""
	I1216 04:13:53.492741 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.492764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:53.492778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:53.492851 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:53.519461 2088124 cri.go:89] found id: ""
	I1216 04:13:53.519484 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.519493 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:53.519499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:53.519559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:53.544555 2088124 cri.go:89] found id: ""
	I1216 04:13:53.544578 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.544587 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:53.544593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:53.544655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:53.570476 2088124 cri.go:89] found id: ""
	I1216 04:13:53.570499 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.570508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:53.570514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:53.570576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:53.598792 2088124 cri.go:89] found id: ""
	I1216 04:13:53.598814 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.598822 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:53.598828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:53.598894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:53.627454 2088124 cri.go:89] found id: ""
	I1216 04:13:53.627477 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.627485 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:53.627494 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:53.627505 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:53.684461 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:53.684541 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:53.709962 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:53.710041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:53.803419 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:53.793865    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.794942    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796672    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796978    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.798420    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:53.793865    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.794942    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796672    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796978    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.798420    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:53.803444 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:53.803462 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:53.829615 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:53.829652 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
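The repeating block above is minikube's log-gathering loop: it probes for each control-plane component with crictl and finds no containers at all. A minimal sketch of the same probe run by hand on the node (component names and the crictl invocation are copied from the log; the loop itself is an illustration, not minikube's exact code):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # --quiet prints only container IDs; empty output means no container,
      # running or exited, has ever been created for this component
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done

An empty result for every component, as seen here, means the static pods were never started under containerd, which is why every request to localhost:8443 is refused.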
	I1216 04:13:56.358195 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:56.368722 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:56.368794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:56.393334 2088124 cri.go:89] found id: ""
	I1216 04:13:56.393358 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.393367 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:56.393373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:56.393440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:56.417912 2088124 cri.go:89] found id: ""
	I1216 04:13:56.417935 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.417944 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:56.417983 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:56.418062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:56.445420 2088124 cri.go:89] found id: ""
	I1216 04:13:56.445451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.445461 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:56.445467 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:56.445526 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:56.469454 2088124 cri.go:89] found id: ""
	I1216 04:13:56.469478 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.469487 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:56.469493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:56.469552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:56.494121 2088124 cri.go:89] found id: ""
	I1216 04:13:56.494145 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.494153 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:56.494165 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:56.494225 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:56.517578 2088124 cri.go:89] found id: ""
	I1216 04:13:56.517602 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.517611 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:56.517637 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:56.517700 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:56.544866 2088124 cri.go:89] found id: ""
	I1216 04:13:56.544891 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.544899 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:56.544941 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:56.545022 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:56.573759 2088124 cri.go:89] found id: ""
	I1216 04:13:56.573787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.573796 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:56.573805 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:56.573817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:56.599163 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:56.599202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:56.630921 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:56.630948 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:56.688477 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:56.688553 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:56.720603 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:56.720634 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:56.828200 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:56.818507    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.819366    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821131    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821683    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.823608    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:56.818507    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.819366    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821131    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821683    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.823608    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:59.328466 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:59.339589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:59.339664 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:59.364346 2088124 cri.go:89] found id: ""
	I1216 04:13:59.364373 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.364382 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:59.364389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:59.364494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:59.393412 2088124 cri.go:89] found id: ""
	I1216 04:13:59.393480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.393503 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:59.393516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:59.393590 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:59.422012 2088124 cri.go:89] found id: ""
	I1216 04:13:59.422039 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.422048 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:59.422055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:59.422111 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:59.447252 2088124 cri.go:89] found id: ""
	I1216 04:13:59.447280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.447289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:59.447301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:59.447362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:59.473224 2088124 cri.go:89] found id: ""
	I1216 04:13:59.473253 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.473262 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:59.473269 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:59.473333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:59.498117 2088124 cri.go:89] found id: ""
	I1216 04:13:59.498142 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.498151 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:59.498157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:59.498218 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:59.531960 2088124 cri.go:89] found id: ""
	I1216 04:13:59.531983 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.531992 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:59.531998 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:59.532064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:59.555530 2088124 cri.go:89] found id: ""
	I1216 04:13:59.555557 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.555567 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:59.555586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:59.555597 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:59.587567 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:59.587594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:59.642770 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:59.642808 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:59.658670 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:59.658698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:59.758071 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:59.747797    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.748997    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.749964    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.751637    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.752201    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:59.747797    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.748997    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.749964    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.751637    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.752201    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:59.758096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:59.758109 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
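Every "describe nodes" attempt ends the same way: kubectl cannot reach the apiserver on localhost:8443. A minimal sketch of checking reachability directly from inside the node (the pgrep pattern is copied from the log; the curl health probe is an added assumption):

    # is any apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # is anything answering on the secure port?
    sudo curl -ksS --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"

Both checks failing is consistent with the crictl output above: no apiserver container exists, so nothing can be listening on 8443.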
	I1216 04:14:02.297267 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:02.308025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:02.308094 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:02.332912 2088124 cri.go:89] found id: ""
	I1216 04:14:02.332938 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.332947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:02.332953 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:02.333015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:02.358723 2088124 cri.go:89] found id: ""
	I1216 04:14:02.358746 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.358754 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:02.358760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:02.358820 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:02.384845 2088124 cri.go:89] found id: ""
	I1216 04:14:02.384869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.384878 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:02.384884 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:02.384947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:02.411300 2088124 cri.go:89] found id: ""
	I1216 04:14:02.411327 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.411337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:02.411343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:02.411401 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:02.436448 2088124 cri.go:89] found id: ""
	I1216 04:14:02.436490 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.436500 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:02.436506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:02.436568 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:02.462003 2088124 cri.go:89] found id: ""
	I1216 04:14:02.462030 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.462039 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:02.462045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:02.462115 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:02.487374 2088124 cri.go:89] found id: ""
	I1216 04:14:02.487398 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.487407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:02.487414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:02.487473 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:02.513515 2088124 cri.go:89] found id: ""
	I1216 04:14:02.513541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.513549 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:02.513559 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:02.513574 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:02.569398 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:02.569439 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:02.585943 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:02.585986 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:02.652956 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:02.644316    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.645186    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.646890    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.647275    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.648908    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:02.644316    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.645186    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.646890    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.647275    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.648908    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:02.653021 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:02.653040 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:02.678261 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:02.678296 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
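With the apiserver down, only the host-level sources in the loop above still produce data: the kubelet and containerd journals, dmesg, and the container runtime itself. A minimal sketch of pulling the same logs by hand (unit names, flags, and line counts copied from the log):

    sudo journalctl -u kubelet -n 400        # kubelet: why static pods are not starting
    sudo journalctl -u containerd -n 400     # containerd: image pulls, sandbox errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

In this failure mode the kubelet journal is usually the most informative of the three, since it records each attempt to start the control-plane static pods.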
	I1216 04:14:05.269784 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:05.280500 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:05.280584 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:05.305398 2088124 cri.go:89] found id: ""
	I1216 04:14:05.305424 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.305432 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:05.305439 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:05.305498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:05.331233 2088124 cri.go:89] found id: ""
	I1216 04:14:05.331256 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.331264 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:05.331270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:05.331329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:05.356501 2088124 cri.go:89] found id: ""
	I1216 04:14:05.356527 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.356537 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:05.356543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:05.356605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:05.383678 2088124 cri.go:89] found id: ""
	I1216 04:14:05.383706 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.383714 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:05.383720 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:05.383819 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:05.408800 2088124 cri.go:89] found id: ""
	I1216 04:14:05.408826 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.408835 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:05.408842 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:05.408900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:05.437636 2088124 cri.go:89] found id: ""
	I1216 04:14:05.437664 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.437673 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:05.437680 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:05.437738 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:05.463588 2088124 cri.go:89] found id: ""
	I1216 04:14:05.463619 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.463628 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:05.463635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:05.463707 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:05.492371 2088124 cri.go:89] found id: ""
	I1216 04:14:05.492399 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.492409 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:05.492418 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:05.492430 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:05.548250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:05.548287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:05.564063 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:05.564088 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:05.632904 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:05.624351    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.625146    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.626904    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.627499    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.629025    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:05.624351    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.625146    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.626904    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.627499    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.629025    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:05.632926 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:05.632939 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:05.659343 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:05.659376 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:08.201168 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:08.211739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:08.211822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:08.236069 2088124 cri.go:89] found id: ""
	I1216 04:14:08.236097 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.236106 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:08.236118 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:08.236177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:08.261051 2088124 cri.go:89] found id: ""
	I1216 04:14:08.261075 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.261083 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:08.261089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:08.261150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:08.285569 2088124 cri.go:89] found id: ""
	I1216 04:14:08.285592 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.285600 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:08.285606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:08.285667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:08.311218 2088124 cri.go:89] found id: ""
	I1216 04:14:08.311258 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.311266 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:08.311273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:08.311366 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:08.345673 2088124 cri.go:89] found id: ""
	I1216 04:14:08.345697 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.345706 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:08.345713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:08.345776 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:08.370418 2088124 cri.go:89] found id: ""
	I1216 04:14:08.370441 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.370449 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:08.370456 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:08.370513 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:08.395107 2088124 cri.go:89] found id: ""
	I1216 04:14:08.395170 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.395196 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:08.395215 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:08.395299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:08.419032 2088124 cri.go:89] found id: ""
	I1216 04:14:08.419085 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.419094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:08.419104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:08.419115 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:08.475411 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:08.475448 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:08.491357 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:08.491391 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:08.557388 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:08.548702    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.549457    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551263    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551890    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.553470    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:08.548702    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.549457    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551263    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551890    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.553470    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:08.557412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:08.557426 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:08.582743 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:08.582777 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
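The "container status" step uses a small fallback chain, copied verbatim below: `which crictl || echo crictl` substitutes the full path to crictl if it is on PATH and the bare name otherwise, and if that invocation fails entirely the command falls back to plain docker:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

On this containerd-based node the crictl branch is the one that runs; the docker fallback presumably covers profiles where only the docker runtime is available.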
	I1216 04:14:11.111145 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:11.123009 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:11.123095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:11.150909 2088124 cri.go:89] found id: ""
	I1216 04:14:11.150934 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.150942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:11.150949 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:11.151075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:11.182574 2088124 cri.go:89] found id: ""
	I1216 04:14:11.182600 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.182610 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:11.182616 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:11.182719 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:11.208283 2088124 cri.go:89] found id: ""
	I1216 04:14:11.208310 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.208319 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:11.208325 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:11.208417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:11.237024 2088124 cri.go:89] found id: ""
	I1216 04:14:11.237052 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.237061 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:11.237069 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:11.237132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:11.265167 2088124 cri.go:89] found id: ""
	I1216 04:14:11.265189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.265197 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:11.265203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:11.265261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:11.290122 2088124 cri.go:89] found id: ""
	I1216 04:14:11.290144 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.290152 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:11.290159 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:11.290217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:11.317188 2088124 cri.go:89] found id: ""
	I1216 04:14:11.317211 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.317219 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:11.317225 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:11.317304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:11.342140 2088124 cri.go:89] found id: ""
	I1216 04:14:11.342164 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.342173 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:11.342206 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:11.342225 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:11.368021 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:11.368058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:11.397287 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:11.397318 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:11.453124 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:11.453158 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:11.468881 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:11.468910 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:11.535360 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:11.526322    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.526895    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.528582    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.529258    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.530769    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:11.526322    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.526895    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.528582    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.529258    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.530769    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
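The "listing CRI containers in root /run/containerd/runc/k8s.io" lines refer to containerd's k8s.io namespace, where the kubelet creates all Kubernetes containers. A sketch of inspecting that namespace with containerd's own CLI, assuming ctr is available in the node image:

    sudo ctr --namespace k8s.io containers list   # container metadata known to containerd
    sudo ctr --namespace k8s.io tasks list        # tasks actually running

An empty listing here, matching the empty crictl output, would narrow the failure to pod creation itself rather than to the crictl probe.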
	I1216 04:14:14.036278 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:14.046954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:14.047104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:14.072898 2088124 cri.go:89] found id: ""
	I1216 04:14:14.072923 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.072932 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:14.072938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:14.072998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:14.098005 2088124 cri.go:89] found id: ""
	I1216 04:14:14.098041 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.098049 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:14.098056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:14.098123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:14.125919 2088124 cri.go:89] found id: ""
	I1216 04:14:14.125945 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.125954 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:14.125961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:14.126068 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:14.151392 2088124 cri.go:89] found id: ""
	I1216 04:14:14.151416 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.151424 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:14.151430 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:14.151494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:14.181023 2088124 cri.go:89] found id: ""
	I1216 04:14:14.181054 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.181064 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:14.181070 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:14.181139 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:14.206141 2088124 cri.go:89] found id: ""
	I1216 04:14:14.206166 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.206175 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:14.206181 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:14.206250 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:14.230051 2088124 cri.go:89] found id: ""
	I1216 04:14:14.230084 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.230093 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:14.230098 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:14.230183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:14.255362 2088124 cri.go:89] found id: ""
	I1216 04:14:14.255388 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.255412 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:14.255423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:14.255434 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:14.310536 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:14.310573 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:14.326390 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:14.326478 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:14.389470 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:14.380411    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.381325    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383077    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383571    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.385261    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:14.380411    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.381325    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383077    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383571    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.385261    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:14.389493 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:14.389512 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:14.415767 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:14.415804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
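	[editor's note] The cycle above (04:14:14) is the shape every retry in this section takes: minikube sweeps the expected control-plane components with crictl, finds no containers for any of them, and falls back to gathering node logs. Below is a minimal sketch of that container probe, assuming only the crictl invocation copied from the log lines above; the Go driver code, helper names, and output format are illustrative, not minikube's actual implementation.

	// Sketch: probe for each expected control-plane container via crictl,
	// mirroring the "sudo crictl ps -a --quiet --name=<component>" lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names taken from the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs, one per line; no output
			// means no container (running or exited) matches the name.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
			}
		}
	}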
	I1216 04:14:16.946959 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:16.978797 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:16.978873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:17.023928 2088124 cri.go:89] found id: ""
	I1216 04:14:17.024005 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.024022 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:17.024030 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:17.024092 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:17.049994 2088124 cri.go:89] found id: ""
	I1216 04:14:17.050024 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.050033 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:17.050040 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:17.050122 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:17.075095 2088124 cri.go:89] found id: ""
	I1216 04:14:17.075120 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.075128 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:17.075134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:17.075195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:17.103161 2088124 cri.go:89] found id: ""
	I1216 04:14:17.103189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.103209 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:17.103216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:17.103687 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:17.139217 2088124 cri.go:89] found id: ""
	I1216 04:14:17.139246 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.139255 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:17.139261 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:17.139325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:17.170063 2088124 cri.go:89] found id: ""
	I1216 04:14:17.170091 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.170102 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:17.170108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:17.170186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:17.195843 2088124 cri.go:89] found id: ""
	I1216 04:14:17.195869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.195879 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:17.195885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:17.195966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:17.221935 2088124 cri.go:89] found id: ""
	I1216 04:14:17.221962 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.221971 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:17.222001 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:17.222019 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:17.278612 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:17.278650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:17.295004 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:17.295076 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:17.359742 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:17.351174    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.351977    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.353668    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.354101    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.355803    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:17.351174    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.351977    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.353668    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.354101    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.355803    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:17.359766 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:17.359779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:17.385281 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:17.385316 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
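	[editor's note] Each cycle fails at the same point: "kubectl describe nodes" cannot reach the apiserver because nothing is listening on localhost:8443, which is consistent with the crictl probes finding no kube-apiserver container. A minimal sketch of that connectivity check, assuming only the address from the log; this is an illustration, not part of minikube.

	// Sketch: the TCP-level check behind the repeated
	// "dial tcp [::1]:8443: connect: connection refused" errors above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// With no apiserver container, this prints a "connection refused"
			// error like the ones kubectl reports in the stderr blocks above.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}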
	I1216 04:14:19.913504 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:19.924126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:19.924223 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:19.981102 2088124 cri.go:89] found id: ""
	I1216 04:14:19.981182 2088124 logs.go:282] 0 containers: []
	W1216 04:14:19.981204 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:19.981223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:19.981319 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:20.025801 2088124 cri.go:89] found id: ""
	I1216 04:14:20.025875 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.025897 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:20.025918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:20.026010 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:20.057062 2088124 cri.go:89] found id: ""
	I1216 04:14:20.057088 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.057097 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:20.057103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:20.057168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:20.082749 2088124 cri.go:89] found id: ""
	I1216 04:14:20.082774 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.082783 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:20.082790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:20.082854 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:20.109626 2088124 cri.go:89] found id: ""
	I1216 04:14:20.109653 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.109663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:20.109670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:20.109731 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:20.134934 2088124 cri.go:89] found id: ""
	I1216 04:14:20.134957 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.134980 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:20.134988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:20.135088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:20.161170 2088124 cri.go:89] found id: ""
	I1216 04:14:20.161197 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.161206 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:20.161213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:20.161299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:20.187553 2088124 cri.go:89] found id: ""
	I1216 04:14:20.187578 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.187587 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:20.187597 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:20.187629 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:20.255987 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:20.246960    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.247520    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.249493    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.250032    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.251482    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:20.246960    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.247520    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.249493    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.250032    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.251482    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:20.256011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:20.256024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:20.281257 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:20.281331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:20.310693 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:20.310724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:20.367395 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:20.367436 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
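	[editor's note] When no component containers are found, the fallback is the "Gathering logs for ..." steps seen in each cycle (kubelet, dmesg, describe nodes, containerd, container status), each a single bash command executed over SSH on the node. The sketch below runs the same commands locally; the commands are copied verbatim from the log, while the surrounding Go driver is an assumption for illustration.

	// Sketch: replay the log-gathering commands from the cycles above.
	// Note: map iteration order is not deterministic, matching the fact
	// that the cycles above also vary the gathering order.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		gather := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"containerd":       "sudo journalctl -u containerd -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range gather {
			// Same shape as the ssh_runner lines above: /bin/bash -c "<cmd>".
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("== %s ==\n%s", name, out)
			if err != nil {
				fmt.Println("(command exited non-zero:", err, ")")
			}
		}
	}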
	I1216 04:14:22.883831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:22.894924 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:22.894999 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:22.920332 2088124 cri.go:89] found id: ""
	I1216 04:14:22.920359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.920379 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:22.920386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:22.920445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:22.977215 2088124 cri.go:89] found id: ""
	I1216 04:14:22.977243 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.977252 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:22.977258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:22.977317 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:23.028698 2088124 cri.go:89] found id: ""
	I1216 04:14:23.028723 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.028732 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:23.028739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:23.028804 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:23.055098 2088124 cri.go:89] found id: ""
	I1216 04:14:23.055124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.055133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:23.055140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:23.055209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:23.080450 2088124 cri.go:89] found id: ""
	I1216 04:14:23.080483 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.080493 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:23.080499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:23.080559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:23.105251 2088124 cri.go:89] found id: ""
	I1216 04:14:23.105275 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.105284 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:23.105296 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:23.105355 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:23.130544 2088124 cri.go:89] found id: ""
	I1216 04:14:23.130573 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.130588 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:23.130594 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:23.130653 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:23.155787 2088124 cri.go:89] found id: ""
	I1216 04:14:23.155863 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.155879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:23.155889 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:23.155901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:23.184285 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:23.184315 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:23.240021 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:23.240058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:23.255934 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:23.255969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:23.324390 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:23.316382    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.316885    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.318697    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.319197    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.320361    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:23.316382    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.316885    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.318697    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.319197    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.320361    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:23.324415 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:23.324432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:25.850349 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:25.861084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:25.861157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:25.885912 2088124 cri.go:89] found id: ""
	I1216 04:14:25.885939 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.885947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:25.885954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:25.886015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:25.914385 2088124 cri.go:89] found id: ""
	I1216 04:14:25.914408 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.914416 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:25.914422 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:25.914482 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:25.957379 2088124 cri.go:89] found id: ""
	I1216 04:14:25.957406 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.957415 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:25.957421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:25.957480 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:26.020008 2088124 cri.go:89] found id: ""
	I1216 04:14:26.020036 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.020045 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:26.020051 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:26.020118 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:26.047424 2088124 cri.go:89] found id: ""
	I1216 04:14:26.047452 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.047461 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:26.047468 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:26.047534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:26.073161 2088124 cri.go:89] found id: ""
	I1216 04:14:26.073187 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.073208 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:26.073216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:26.073277 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:26.103238 2088124 cri.go:89] found id: ""
	I1216 04:14:26.103260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.103268 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:26.103274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:26.103337 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:26.128964 2088124 cri.go:89] found id: ""
	I1216 04:14:26.128993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.129004 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:26.129013 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:26.129025 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:26.185309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:26.185350 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:26.201116 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:26.201191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:26.261346 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:26.253589    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.254367    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255430    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255945    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.257599    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:26.253589    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.254367    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255430    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255945    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.257599    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:26.261367 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:26.261379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:26.286659 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:26.286693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:28.816260 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:28.826799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:28.826873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:28.851396 2088124 cri.go:89] found id: ""
	I1216 04:14:28.851425 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.851435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:28.851441 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:28.851503 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:28.875518 2088124 cri.go:89] found id: ""
	I1216 04:14:28.875541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.875550 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:28.875556 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:28.875614 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:28.904430 2088124 cri.go:89] found id: ""
	I1216 04:14:28.904454 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.904462 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:28.904476 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:28.904537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:28.929129 2088124 cri.go:89] found id: ""
	I1216 04:14:28.929153 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.929162 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:28.929169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:28.929228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:28.966014 2088124 cri.go:89] found id: ""
	I1216 04:14:28.966042 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.966051 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:28.966057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:28.966123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:29.025945 2088124 cri.go:89] found id: ""
	I1216 04:14:29.025972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.025988 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:29.025995 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:29.026064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:29.051899 2088124 cri.go:89] found id: ""
	I1216 04:14:29.051935 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.051946 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:29.051952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:29.052023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:29.080317 2088124 cri.go:89] found id: ""
	I1216 04:14:29.080341 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.080351 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:29.080361 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:29.080373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:29.135930 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:29.135967 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:29.154187 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:29.154216 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:29.221073 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:29.212783    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.213403    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.214843    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.215170    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.216622    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:29.212783    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.213403    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.214843    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.215170    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.216622    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:29.221096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:29.221111 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:29.246641 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:29.246676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:31.779202 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:31.790954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:31.791029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:31.817812 2088124 cri.go:89] found id: ""
	I1216 04:14:31.817897 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.817925 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:31.817946 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:31.818067 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:31.842726 2088124 cri.go:89] found id: ""
	I1216 04:14:31.842753 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.842762 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:31.842769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:31.842832 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:31.868497 2088124 cri.go:89] found id: ""
	I1216 04:14:31.868523 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.868532 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:31.868538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:31.868602 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:31.898624 2088124 cri.go:89] found id: ""
	I1216 04:14:31.898646 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.898655 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:31.898662 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:31.898720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:31.924967 2088124 cri.go:89] found id: ""
	I1216 04:14:31.924993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.925003 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:31.925011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:31.925074 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:31.966946 2088124 cri.go:89] found id: ""
	I1216 04:14:31.966972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.966981 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:31.966988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:31.967075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:31.999136 2088124 cri.go:89] found id: ""
	I1216 04:14:31.999162 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.999170 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:31.999177 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:31.999248 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:32.037224 2088124 cri.go:89] found id: ""
	I1216 04:14:32.037260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:32.037269 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:32.037280 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:32.037292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:32.098221 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:32.098257 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:32.114315 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:32.114346 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:32.179522 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:32.170571    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.171240    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.172913    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.173537    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.175422    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:32.170571    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.171240    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.172913    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.173537    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.175422    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:32.179546 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:32.179598 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:32.205901 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:32.205937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:34.736487 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:34.747033 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:34.747125 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:34.774783 2088124 cri.go:89] found id: ""
	I1216 04:14:34.774808 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.774817 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:34.774826 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:34.774892 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:34.804248 2088124 cri.go:89] found id: ""
	I1216 04:14:34.804272 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.804281 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:34.804294 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:34.804356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:34.829461 2088124 cri.go:89] found id: ""
	I1216 04:14:34.829485 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.829493 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:34.829499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:34.829560 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:34.857116 2088124 cri.go:89] found id: ""
	I1216 04:14:34.857141 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.857151 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:34.857157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:34.857219 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:34.882336 2088124 cri.go:89] found id: ""
	I1216 04:14:34.882359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.882367 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:34.882373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:34.882434 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:34.907931 2088124 cri.go:89] found id: ""
	I1216 04:14:34.907954 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.907962 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:34.907969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:34.908027 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:34.956047 2088124 cri.go:89] found id: ""
	I1216 04:14:34.956069 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.956077 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:34.956084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:34.956145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:35.024159 2088124 cri.go:89] found id: ""
	I1216 04:14:35.024183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:35.024197 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:35.024207 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:35.024218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:35.052560 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:35.052632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:35.120169 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:35.109992    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.110996    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.112972    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.113360    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.115151    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:35.109992    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.110996    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.112972    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.113360    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.115151    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:35.120193 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:35.120206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:35.148539 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:35.148572 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:35.177137 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:35.177163 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:37.736828 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:37.748034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:37.748119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:37.774072 2088124 cri.go:89] found id: ""
	I1216 04:14:37.774096 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.774105 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:37.774113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:37.774174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:37.798854 2088124 cri.go:89] found id: ""
	I1216 04:14:37.798879 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.798887 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:37.798893 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:37.798953 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:37.824863 2088124 cri.go:89] found id: ""
	I1216 04:14:37.824889 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.824898 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:37.824905 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:37.824995 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:37.849318 2088124 cri.go:89] found id: ""
	I1216 04:14:37.849340 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.849348 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:37.849354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:37.849418 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:37.874246 2088124 cri.go:89] found id: ""
	I1216 04:14:37.874269 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.874277 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:37.874285 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:37.874343 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:37.900978 2088124 cri.go:89] found id: ""
	I1216 04:14:37.901002 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.901010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:37.901016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:37.901076 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:37.929331 2088124 cri.go:89] found id: ""
	I1216 04:14:37.929360 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.929370 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:37.929376 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:37.929440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:37.969527 2088124 cri.go:89] found id: ""
	I1216 04:14:37.969556 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.969564 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
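All eight crictl queries in the pass above return empty ID lists, so no control-plane container exists in the k8s.io runc root at all. A minimal sketch of the same per-component check is below; the command line is copied from the Run: lines above and the component list is the one the log iterates, but this is not minikube's actual cri.go:

    // Sketch of the per-component container check the log repeats
    // (not minikube's cri.go).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if ids := strings.Fields(string(out)); len(ids) > 0 {
                fmt.Printf("%s: %v\n", name, ids)
            } else {
                fmt.Printf("No container was found matching %q\n", name)
            }
        }
    }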
	I1216 04:14:37.969573 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:37.969585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:38.009528 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:38.009566 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:38.055850 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:38.055880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:38.113260 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:38.113301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:38.129810 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:38.129846 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:38.195392 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:38.187231    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.187971    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.189569    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.190023    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.191590    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
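After the container check, each diagnostic pass shells out for the same log sources. A sketch of that fan-out is below, reusing the exact bash one-liners from the Run: lines above, including the `which crictl || echo crictl` fallback that degrades to docker ps when crictl is absent; the ordering and any error handling here are assumptions, and the describe-nodes step (omitted here) is the one that keeps failing above:

    // Sketch of the log-gathering fan-out shown above (not minikube's logs.go).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        gather := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gather {
            fmt.Println("Gathering logs for", g.name, "...")
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", g.name, err)
            }
            fmt.Print(string(out))
        }
    }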
	I1216 04:14:40.695695 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:40.706489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:40.706566 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:40.733370 2088124 cri.go:89] found id: ""
	I1216 04:14:40.733400 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.733409 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:40.733416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:40.733476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:40.760997 2088124 cri.go:89] found id: ""
	I1216 04:14:40.761027 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.761037 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:40.761043 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:40.761106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:40.785757 2088124 cri.go:89] found id: ""
	I1216 04:14:40.785785 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.785793 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:40.785799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:40.785859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:40.810917 2088124 cri.go:89] found id: ""
	I1216 04:14:40.810946 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.810954 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:40.810961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:40.811021 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:40.837261 2088124 cri.go:89] found id: ""
	I1216 04:14:40.837289 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.837298 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:40.837306 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:40.837367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:40.865095 2088124 cri.go:89] found id: ""
	I1216 04:14:40.865124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.865133 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:40.865139 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:40.865197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:40.893132 2088124 cri.go:89] found id: ""
	I1216 04:14:40.893156 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.893164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:40.893170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:40.893230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:40.917368 2088124 cri.go:89] found id: ""
	I1216 04:14:40.917390 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.917398 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:40.917407 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:40.917418 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:40.988706 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:40.988789 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:41.026114 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:41.026141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:41.097192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:41.088410    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.089258    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.090961    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.091663    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.093296    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:41.097218 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:41.097232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:41.122894 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:41.122929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:43.655609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:43.666076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:43.666148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:43.697517 2088124 cri.go:89] found id: ""
	I1216 04:14:43.697542 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.697550 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:43.697557 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:43.697617 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:43.722700 2088124 cri.go:89] found id: ""
	I1216 04:14:43.722727 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.722737 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:43.722743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:43.722811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:43.751469 2088124 cri.go:89] found id: ""
	I1216 04:14:43.751496 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.751509 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:43.751516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:43.751577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:43.776779 2088124 cri.go:89] found id: ""
	I1216 04:14:43.776804 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.776812 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:43.776818 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:43.776876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:43.801004 2088124 cri.go:89] found id: ""
	I1216 04:14:43.801028 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.801037 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:43.801044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:43.801131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:43.825723 2088124 cri.go:89] found id: ""
	I1216 04:14:43.825747 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.825756 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:43.825763 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:43.825823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:43.854440 2088124 cri.go:89] found id: ""
	I1216 04:14:43.854464 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.854473 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:43.854479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:43.854537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:43.881228 2088124 cri.go:89] found id: ""
	I1216 04:14:43.881251 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.881261 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:43.881270 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:43.881282 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:43.908258 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:43.908330 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:43.975235 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:43.975273 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:44.032765 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:44.032798 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:44.097769 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:44.088888    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.089678    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.091397    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.092058    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.093573    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:44.097791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:44.097814 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:46.624214 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:46.634860 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:46.634939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:46.662490 2088124 cri.go:89] found id: ""
	I1216 04:14:46.662518 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.662528 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:46.662534 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:46.662598 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:46.687532 2088124 cri.go:89] found id: ""
	I1216 04:14:46.687558 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.687567 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:46.687574 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:46.687639 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:46.711951 2088124 cri.go:89] found id: ""
	I1216 04:14:46.711978 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.711988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:46.711994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:46.712054 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:46.742207 2088124 cri.go:89] found id: ""
	I1216 04:14:46.742241 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.742250 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:46.742257 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:46.742331 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:46.766943 2088124 cri.go:89] found id: ""
	I1216 04:14:46.766972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.766981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:46.766988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:46.767070 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:46.792400 2088124 cri.go:89] found id: ""
	I1216 04:14:46.792432 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.792442 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:46.792455 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:46.792533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:46.817511 2088124 cri.go:89] found id: ""
	I1216 04:14:46.817533 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.817542 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:46.817548 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:46.817610 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:46.845432 2088124 cri.go:89] found id: ""
	I1216 04:14:46.845455 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.845464 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:46.845473 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:46.845484 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:46.901017 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:46.901050 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:46.916980 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:46.917012 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:47.034196 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:47.019727    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.020515    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022325    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022651    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.026596    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:47.034216 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:47.034230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:47.060131 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:47.060167 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:49.592378 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:49.603274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:49.603390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:49.628592 2088124 cri.go:89] found id: ""
	I1216 04:14:49.628617 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.628626 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:49.628632 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:49.628693 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:49.654951 2088124 cri.go:89] found id: ""
	I1216 04:14:49.654974 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.654983 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:49.654990 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:49.655079 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:49.680966 2088124 cri.go:89] found id: ""
	I1216 04:14:49.680992 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.681004 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:49.681011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:49.681077 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:49.705520 2088124 cri.go:89] found id: ""
	I1216 04:14:49.705549 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.705558 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:49.705565 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:49.705624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:49.735615 2088124 cri.go:89] found id: ""
	I1216 04:14:49.735643 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.735653 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:49.735660 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:49.735723 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:49.761693 2088124 cri.go:89] found id: ""
	I1216 04:14:49.761721 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.761730 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:49.761736 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:49.761799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:49.786810 2088124 cri.go:89] found id: ""
	I1216 04:14:49.786852 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.786866 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:49.786875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:49.786943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:49.815183 2088124 cri.go:89] found id: ""
	I1216 04:14:49.815209 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.815218 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:49.815236 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:49.815247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:49.870316 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:49.870351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:49.886698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:49.886724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:50.017086 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:49.989874    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.000829    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.004526    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.006532    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.011272    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:50.017115 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:50.017137 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:50.046781 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:50.046822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:52.580326 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:52.591108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:52.591184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:52.619853 2088124 cri.go:89] found id: ""
	I1216 04:14:52.619876 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.619884 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:52.619891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:52.619973 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:52.644168 2088124 cri.go:89] found id: ""
	I1216 04:14:52.644191 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.644199 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:52.644205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:52.644266 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:52.669818 2088124 cri.go:89] found id: ""
	I1216 04:14:52.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.669850 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:52.669856 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:52.669916 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:52.695228 2088124 cri.go:89] found id: ""
	I1216 04:14:52.695252 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.695260 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:52.695267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:52.695329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:52.720235 2088124 cri.go:89] found id: ""
	I1216 04:14:52.720260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.720269 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:52.720275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:52.720339 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:52.749551 2088124 cri.go:89] found id: ""
	I1216 04:14:52.749574 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.749582 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:52.749589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:52.749651 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:52.776351 2088124 cri.go:89] found id: ""
	I1216 04:14:52.776375 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.776383 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:52.776389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:52.776450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:52.805147 2088124 cri.go:89] found id: ""
	I1216 04:14:52.805175 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.805185 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:52.805195 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:52.805211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:52.831059 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:52.831098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:52.861113 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:52.861143 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:52.916847 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:52.916883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:52.933489 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:52.933517 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:53.043697 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:53.034203    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.035135    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.036921    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.037478    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.039232    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:55.544026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:55.554861 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:55.554956 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:55.578474 2088124 cri.go:89] found id: ""
	I1216 04:14:55.578502 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.578511 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:55.578518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:55.578633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:55.602756 2088124 cri.go:89] found id: ""
	I1216 04:14:55.602795 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.602804 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:55.602811 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:55.602900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:55.633011 2088124 cri.go:89] found id: ""
	I1216 04:14:55.633035 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.633043 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:55.633049 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:55.633136 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:55.658213 2088124 cri.go:89] found id: ""
	I1216 04:14:55.658247 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.658257 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:55.658280 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:55.658411 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:55.683154 2088124 cri.go:89] found id: ""
	I1216 04:14:55.683183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.683201 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:55.683208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:55.683280 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:55.707894 2088124 cri.go:89] found id: ""
	I1216 04:14:55.707968 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.707991 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:55.708010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:55.708099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:55.732419 2088124 cri.go:89] found id: ""
	I1216 04:14:55.732506 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.732531 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:55.732543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:55.732624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:55.760911 2088124 cri.go:89] found id: ""
	I1216 04:14:55.760981 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.761007 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:55.761023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:55.761038 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:55.817437 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:55.817473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:55.833374 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:55.833405 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:55.898151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:55.890310    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.890838    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892319    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892862    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.894354    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:55.898175 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:55.898195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:55.923776 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:55.923810 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:58.462512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:58.474113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:58.474190 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:58.500558 2088124 cri.go:89] found id: ""
	I1216 04:14:58.500581 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.500590 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:58.500597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:58.500659 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:58.525784 2088124 cri.go:89] found id: ""
	I1216 04:14:58.525809 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.525818 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:58.525824 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:58.525883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:58.550534 2088124 cri.go:89] found id: ""
	I1216 04:14:58.550560 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.550570 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:58.550577 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:58.550634 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:58.577140 2088124 cri.go:89] found id: ""
	I1216 04:14:58.577167 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.577177 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:58.577184 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:58.577244 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:58.605864 2088124 cri.go:89] found id: ""
	I1216 04:14:58.605890 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.605904 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:58.605911 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:58.605975 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:58.634121 2088124 cri.go:89] found id: ""
	I1216 04:14:58.634152 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.634161 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:58.634168 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:58.634239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:58.660170 2088124 cri.go:89] found id: ""
	I1216 04:14:58.660198 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.660207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:58.660213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:58.660273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:58.685306 2088124 cri.go:89] found id: ""
	I1216 04:14:58.685333 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.685342 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:58.685351 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:58.685364 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:58.741326 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:58.741362 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:58.757562 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:58.757594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:58.823813 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:58.815321    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.815799    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817390    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817823    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.819361    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:58.815321    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.815799    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817390    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817823    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.819361    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
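Every "describe nodes" probe in this window fails the same way: kubectl cannot reach the apiserver on localhost:8443, so no node state is ever captured. A minimal way to reproduce the probe by hand (a sketch only: PROFILE stands for the minikube profile driving this log stream, which this excerpt does not name):

    # Sketch: probe the same endpoint kubectl is failing against.
    minikube ssh -p PROFILE -- curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo "apiserver unreachable"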
	I1216 04:14:58.823838 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:58.823854 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:58.849684 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:58.849722 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.379834 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:01.391065 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:01.391142 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:01.417501 2088124 cri.go:89] found id: ""
	I1216 04:15:01.417579 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.417602 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:01.417643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:01.417737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:01.448334 2088124 cri.go:89] found id: ""
	I1216 04:15:01.448360 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.448368 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:01.448375 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:01.448447 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:01.476980 2088124 cri.go:89] found id: ""
	I1216 04:15:01.477006 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.477015 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:01.477022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:01.477108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:01.501087 2088124 cri.go:89] found id: ""
	I1216 04:15:01.501110 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.501118 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:01.501125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:01.501183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:01.526116 2088124 cri.go:89] found id: ""
	I1216 04:15:01.526139 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.526147 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:01.526154 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:01.526217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:01.552211 2088124 cri.go:89] found id: ""
	I1216 04:15:01.552234 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.552249 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:01.552255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:01.552314 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:01.579190 2088124 cri.go:89] found id: ""
	I1216 04:15:01.579220 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.579229 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:01.579243 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:01.579362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:01.606084 2088124 cri.go:89] found id: ""
	I1216 04:15:01.606108 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.606118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:01.606127 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:01.606139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.638251 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:01.638281 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:01.698103 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:01.698145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:01.714771 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:01.714858 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:01.780079 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:01.771727    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.772399    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774006    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774479    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.776016    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:01.771727    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.772399    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774006    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774479    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.776016    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
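The container lookups above run one crictl query per control-plane component, and every one comes back empty. Collapsed into a single loop for by-hand checking inside the node (component list copied from the log; a sketch, not minikube's actual code path):

    # Sketch: replay the per-component container lookups from the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done

An empty result for every name, including etcd, means no control-plane container was ever created, which points at kubelet or bootstrap failure rather than at any single component crashing.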
	I1216 04:15:01.780150 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:01.780177 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:04.307354 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:04.318980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:04.319082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:04.348465 2088124 cri.go:89] found id: ""
	I1216 04:15:04.348496 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.348506 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:04.348513 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:04.348593 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:04.374442 2088124 cri.go:89] found id: ""
	I1216 04:15:04.374467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.374476 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:04.374485 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:04.374543 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:04.401352 2088124 cri.go:89] found id: ""
	I1216 04:15:04.401376 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.401384 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:04.401390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:04.401448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:04.427946 2088124 cri.go:89] found id: ""
	I1216 04:15:04.427969 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.427978 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:04.427984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:04.428044 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:04.453439 2088124 cri.go:89] found id: ""
	I1216 04:15:04.453474 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.453483 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:04.453490 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:04.453549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:04.478368 2088124 cri.go:89] found id: ""
	I1216 04:15:04.478395 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.478403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:04.478409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:04.478467 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:04.502274 2088124 cri.go:89] found id: ""
	I1216 04:15:04.502303 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.502312 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:04.502318 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:04.502379 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:04.526440 2088124 cri.go:89] found id: ""
	I1216 04:15:04.526467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.526475 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:04.526484 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:04.526494 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:04.581559 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:04.581596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:04.597786 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:04.597815 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:04.661194 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:04.653077    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.653620    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655229    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655719    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.657352    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:04.653077    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.653620    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655229    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655719    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.657352    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:04.661217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:04.661230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:04.686508 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:04.686544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.214226 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:07.226828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:07.226904 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:07.267774 2088124 cri.go:89] found id: ""
	I1216 04:15:07.267805 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.267814 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:07.267820 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:07.267880 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:07.293953 2088124 cri.go:89] found id: ""
	I1216 04:15:07.293980 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.293988 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:07.293994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:07.294052 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:07.317542 2088124 cri.go:89] found id: ""
	I1216 04:15:07.317568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.317577 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:07.317583 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:07.317695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:07.351422 2088124 cri.go:89] found id: ""
	I1216 04:15:07.351449 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.351458 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:07.351465 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:07.351552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:07.376043 2088124 cri.go:89] found id: ""
	I1216 04:15:07.376069 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.376092 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:07.376121 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:07.376204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:07.400719 2088124 cri.go:89] found id: ""
	I1216 04:15:07.400749 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.400758 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:07.400765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:07.400849 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:07.425726 2088124 cri.go:89] found id: ""
	I1216 04:15:07.425754 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.425763 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:07.425769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:07.425833 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:07.450385 2088124 cri.go:89] found id: ""
	I1216 04:15:07.450413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.450422 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:07.450431 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:07.450444 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.482416 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:07.482446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:07.543525 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:07.543569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:07.559963 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:07.559991 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:07.626193 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:07.617713    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.618478    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620010    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620349    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.621831    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:07.617713    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.618478    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620010    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620349    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.621831    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
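With no containers to inspect, each round falls back to gathering the same log sources: kubelet and containerd via journalctl, plus kernel warnings via dmesg. The commands, lifted verbatim from the log lines above, can be run directly in the node for a live look:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400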
	I1216 04:15:07.626217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:07.626233 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.151663 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:10.162850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:10.162922 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:10.218459 2088124 cri.go:89] found id: ""
	I1216 04:15:10.218492 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.218502 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:10.218508 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:10.218581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:10.266687 2088124 cri.go:89] found id: ""
	I1216 04:15:10.266716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.266726 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:10.266732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:10.266794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:10.297579 2088124 cri.go:89] found id: ""
	I1216 04:15:10.297607 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.297616 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:10.297623 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:10.297682 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:10.327612 2088124 cri.go:89] found id: ""
	I1216 04:15:10.327637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.327646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:10.327652 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:10.327710 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:10.352049 2088124 cri.go:89] found id: ""
	I1216 04:15:10.352073 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.352082 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:10.352088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:10.352150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:10.380981 2088124 cri.go:89] found id: ""
	I1216 04:15:10.381005 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.381013 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:10.381020 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:10.381083 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:10.405173 2088124 cri.go:89] found id: ""
	I1216 04:15:10.405198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.405207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:10.405213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:10.405271 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:10.430194 2088124 cri.go:89] found id: ""
	I1216 04:15:10.430219 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.430248 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:10.430259 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:10.430272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:10.486344 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:10.486381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:10.502248 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:10.502278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:10.568856 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:10.561184    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.561736    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563232    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563538    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.565000    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:10.561184    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.561736    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563232    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563538    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.565000    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:10.568879 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:10.568893 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.595314 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:10.595349 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:13.125478 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:13.136862 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:13.136937 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:13.162398 2088124 cri.go:89] found id: ""
	I1216 04:15:13.162432 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.162442 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:13.162449 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:13.162512 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:13.213417 2088124 cri.go:89] found id: ""
	I1216 04:15:13.213443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.213451 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:13.213457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:13.213515 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:13.265047 2088124 cri.go:89] found id: ""
	I1216 04:15:13.265074 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.265082 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:13.265089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:13.265146 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:13.295404 2088124 cri.go:89] found id: ""
	I1216 04:15:13.295431 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.295442 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:13.295448 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:13.295510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:13.320244 2088124 cri.go:89] found id: ""
	I1216 04:15:13.320272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.320281 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:13.320288 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:13.320347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:13.343989 2088124 cri.go:89] found id: ""
	I1216 04:15:13.344013 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.344022 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:13.344028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:13.344088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:13.367813 2088124 cri.go:89] found id: ""
	I1216 04:15:13.367838 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.367847 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:13.367854 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:13.367914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:13.391747 2088124 cri.go:89] found id: ""
	I1216 04:15:13.391772 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.391782 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:13.391791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:13.391802 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:13.416337 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:13.416373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:13.443257 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:13.443286 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:13.501977 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:13.502016 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:13.517698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:13.517730 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:13.580974 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:13.572384    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.573100    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.574739    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.575073    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.576736    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:13.572384    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.573100    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.574739    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.575073    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.576736    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
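Between log-gathering rounds the driver re-checks for an apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`, roughly every three seconds in the timestamps above. A hedged stand-in for that wait loop (retry count and sleep interval are illustrative, not minikube's actual values):

    # Sketch only: poll for a kube-apiserver process, as the log does above.
    for i in $(seq 1 20); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo "found"; break; }
      sleep 3
    done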
	I1216 04:15:16.081274 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:16.092248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:16.092325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:16.118104 2088124 cri.go:89] found id: ""
	I1216 04:15:16.118128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.118138 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:16.118145 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:16.118207 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:16.148494 2088124 cri.go:89] found id: ""
	I1216 04:15:16.148519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.148529 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:16.148535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:16.148600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:16.177106 2088124 cri.go:89] found id: ""
	I1216 04:15:16.177133 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.177142 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:16.177148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:16.177209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:16.225478 2088124 cri.go:89] found id: ""
	I1216 04:15:16.225512 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.225521 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:16.225528 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:16.225601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:16.263615 2088124 cri.go:89] found id: ""
	I1216 04:15:16.263642 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.263651 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:16.263657 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:16.263717 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:16.288816 2088124 cri.go:89] found id: ""
	I1216 04:15:16.288840 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.288849 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:16.288855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:16.288915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:16.313866 2088124 cri.go:89] found id: ""
	I1216 04:15:16.313899 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.313909 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:16.313915 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:16.313986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:16.338822 2088124 cri.go:89] found id: ""
	I1216 04:15:16.338847 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.338865 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:16.338874 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:16.338886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:16.397500 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:16.397535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:16.413373 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:16.413401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:16.481369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:16.473361    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.474033    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475539    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475954    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.477408    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:16.473361    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.474033    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475539    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475954    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.477408    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:16.481391 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:16.481404 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:16.506768 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:16.506801 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
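The "container status" probe uses a shell fallback so it works whether or not crictl is on PATH: `which crictl || echo crictl` substitutes the resolved path when available, and otherwise leaves the bare name so the failed invocation falls through to `sudo docker ps -a`. As a standalone command inside the node (equivalent to the backtick form in the log line above):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a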
	I1216 04:15:19.036905 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:19.047523 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:19.047594 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:19.071924 2088124 cri.go:89] found id: ""
	I1216 04:15:19.071947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.071956 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:19.071963 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:19.072020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:19.096694 2088124 cri.go:89] found id: ""
	I1216 04:15:19.096716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.096736 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:19.096742 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:19.096808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:19.122106 2088124 cri.go:89] found id: ""
	I1216 04:15:19.122129 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.122137 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:19.122144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:19.122204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:19.151300 2088124 cri.go:89] found id: ""
	I1216 04:15:19.151327 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.151337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:19.151346 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:19.151407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:19.176879 2088124 cri.go:89] found id: ""
	I1216 04:15:19.176906 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.176915 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:19.176921 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:19.176982 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:19.248606 2088124 cri.go:89] found id: ""
	I1216 04:15:19.248637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.248646 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:19.248654 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:19.248720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:19.284067 2088124 cri.go:89] found id: ""
	I1216 04:15:19.284095 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.284105 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:19.284111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:19.284179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:19.309536 2088124 cri.go:89] found id: ""
	I1216 04:15:19.309564 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.309573 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:19.309583 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:19.309595 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:19.336019 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:19.336059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:19.363926 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:19.363997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:19.420745 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:19.420779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:19.437274 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:19.437306 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:19.501939 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:19.493862    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.494691    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496168    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496603    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.498063    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:19.493862    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.494691    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496168    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496603    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.498063    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
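Each cycle ends with the same "describe nodes" attempt; it can be replayed by hand inside the node using the exact binary and kubeconfig paths from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

As long as nothing is listening on localhost:8443, this keeps exiting with status 1 and the connection-refused stderr shown above.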
	I1216 04:15:22.002831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:22.019000 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:22.019099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:22.045729 2088124 cri.go:89] found id: ""
	I1216 04:15:22.045753 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.045762 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:22.045769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:22.045831 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:22.073468 2088124 cri.go:89] found id: ""
	I1216 04:15:22.073494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.073504 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:22.073511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:22.073572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:22.099372 2088124 cri.go:89] found id: ""
	I1216 04:15:22.099397 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.099407 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:22.099413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:22.099475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:22.124283 2088124 cri.go:89] found id: ""
	I1216 04:15:22.124358 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.124371 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:22.124378 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:22.124509 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:22.149430 2088124 cri.go:89] found id: ""
	I1216 04:15:22.149456 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.149466 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:22.149472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:22.149532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:22.179789 2088124 cri.go:89] found id: ""
	I1216 04:15:22.179813 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.179822 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:22.179829 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:22.179920 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:22.233299 2088124 cri.go:89] found id: ""
	I1216 04:15:22.233333 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.233342 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:22.233380 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:22.233495 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:22.281260 2088124 cri.go:89] found id: ""
	I1216 04:15:22.281287 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.281296 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:22.281305 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:22.281354 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:22.299880 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:22.299908 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:22.370389 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:22.359272    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.360819    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.361789    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.363665    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.365341    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:22.359272    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.360819    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.361789    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.363665    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.365341    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:22.370413 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:22.370427 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:22.395585 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:22.395618 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:22.423071 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:22.423103 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
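Editor's note: each poll in the log walks the same list of control-plane component names through "sudo crictl ps -a --quiet --name=..." and records found id: "" for every one, meaning no control-plane container has ever been created. A minimal sketch of that lookup follows, using only the command line visible above; the function and variable names are invented for illustration and are not minikube's.

	// crilist.go - sketch of the per-component container lookup repeated in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// foundIDs runs the exact crictl invocation from the log and returns the
	// container IDs it prints, one per line (empty slice when none exist).
	func foundIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil // treat a failed lookup the same as "no containers"
		}
		return strings.Fields(string(out))
	}

	func main() {
		for _, name := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		} {
			ids := foundIDs(name)
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}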
	I1216 04:15:24.979909 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:24.990414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:24.990487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:25.022894 2088124 cri.go:89] found id: ""
	I1216 04:15:25.022933 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.022942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:25.022950 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:25.023035 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:25.057555 2088124 cri.go:89] found id: ""
	I1216 04:15:25.057592 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.057602 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:25.057609 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:25.057674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:25.084421 2088124 cri.go:89] found id: ""
	I1216 04:15:25.084446 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.084455 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:25.084462 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:25.084534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:25.112223 2088124 cri.go:89] found id: ""
	I1216 04:15:25.112249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.112258 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:25.112266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:25.112340 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:25.138162 2088124 cri.go:89] found id: ""
	I1216 04:15:25.138186 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.138195 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:25.138202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:25.138262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:25.165660 2088124 cri.go:89] found id: ""
	I1216 04:15:25.165689 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.165698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:25.165705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:25.165775 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:25.213233 2088124 cri.go:89] found id: ""
	I1216 04:15:25.213260 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.213269 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:25.213275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:25.213333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:25.254540 2088124 cri.go:89] found id: ""
	I1216 04:15:25.254567 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.254576 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:25.254586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:25.254599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:25.290970 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:25.290997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:25.349010 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:25.349046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:25.364592 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:25.364626 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:25.428643 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:25.420082    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.420822    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.422550    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.423262    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.424839    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:25.420082    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.420822    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.422550    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.423262    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.424839    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:25.428666 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:25.428680 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:27.954878 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:27.965363 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:27.965430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:27.991310 2088124 cri.go:89] found id: ""
	I1216 04:15:27.991338 2088124 logs.go:282] 0 containers: []
	W1216 04:15:27.991347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:27.991354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:27.991416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:28.017496 2088124 cri.go:89] found id: ""
	I1216 04:15:28.017519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.017528 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:28.017535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:28.017600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:28.043243 2088124 cri.go:89] found id: ""
	I1216 04:15:28.043267 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.043276 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:28.043282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:28.043349 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:28.070592 2088124 cri.go:89] found id: ""
	I1216 04:15:28.070620 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.070629 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:28.070635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:28.070705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:28.096408 2088124 cri.go:89] found id: ""
	I1216 04:15:28.096430 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.096439 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:28.096446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:28.096517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:28.122523 2088124 cri.go:89] found id: ""
	I1216 04:15:28.122547 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.122556 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:28.122563 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:28.122627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:28.148233 2088124 cri.go:89] found id: ""
	I1216 04:15:28.148256 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.148264 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:28.148270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:28.148335 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:28.174686 2088124 cri.go:89] found id: ""
	I1216 04:15:28.174715 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.174724 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:28.174733 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:28.174745 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:28.248922 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:28.249042 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:28.270319 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:28.270345 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:28.344544 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:28.335802    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.336388    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338081    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338450    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.339995    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:28.335802    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.336388    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338081    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338450    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.339995    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:28.344568 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:28.344583 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:28.370869 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:28.370905 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:30.901180 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:30.914236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:30.914316 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:30.943226 2088124 cri.go:89] found id: ""
	I1216 04:15:30.943247 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.943255 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:30.943262 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:30.943320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:30.969548 2088124 cri.go:89] found id: ""
	I1216 04:15:30.969573 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.969581 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:30.969588 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:30.969648 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:30.996727 2088124 cri.go:89] found id: ""
	I1216 04:15:30.996750 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.996759 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:30.996765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:30.996823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:31.023099 2088124 cri.go:89] found id: ""
	I1216 04:15:31.023125 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.023133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:31.023140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:31.023202 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:31.052543 2088124 cri.go:89] found id: ""
	I1216 04:15:31.052568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.052577 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:31.052584 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:31.052646 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:31.079096 2088124 cri.go:89] found id: ""
	I1216 04:15:31.079119 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.079128 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:31.079134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:31.079197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:31.108706 2088124 cri.go:89] found id: ""
	I1216 04:15:31.108777 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.108801 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:31.108815 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:31.108894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:31.138097 2088124 cri.go:89] found id: ""
	I1216 04:15:31.138122 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.138130 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:31.138140 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:31.138152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:31.163977 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:31.164066 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:31.220358 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:31.220432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:31.291830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:31.291912 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:31.307651 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:31.307678 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:31.376724 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:31.367014    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369142    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369906    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371443    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371922    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:31.367014    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369142    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369906    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371443    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371922    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
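Editor's note: the timestamps above (04:15:22, :25, :28, :31, :33, ...) show a roughly three-second retry cadence around the pgrep check for a running apiserver process. A self-contained approximation of that wait loop is sketched below; the pgrep pattern is the one in the log, while the loop structure and the overall deadline are assumptions (the real timeout is not visible in this excerpt).

	// apiserverwait.go - sketch of the ~3s poll loop visible in the log timestamps.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			// Matches the exact check in the log: an apiserver process whose
			// command line mentions the "minikube" profile. pgrep exits
			// nonzero when there is no match, which Run reports as an error.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}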
	I1216 04:15:33.876969 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:33.887678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:33.887751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:33.911475 2088124 cri.go:89] found id: ""
	I1216 04:15:33.911503 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.911513 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:33.911520 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:33.911581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:33.936829 2088124 cri.go:89] found id: ""
	I1216 04:15:33.936852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.936861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:33.936866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:33.936924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:33.961061 2088124 cri.go:89] found id: ""
	I1216 04:15:33.961085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.961094 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:33.961101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:33.961168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:33.985053 2088124 cri.go:89] found id: ""
	I1216 04:15:33.985078 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.985086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:33.985093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:33.985154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:34.015083 2088124 cri.go:89] found id: ""
	I1216 04:15:34.015112 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.015122 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:34.015129 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:34.015191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:34.040899 2088124 cri.go:89] found id: ""
	I1216 04:15:34.040922 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.040930 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:34.040936 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:34.041001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:34.066663 2088124 cri.go:89] found id: ""
	I1216 04:15:34.066744 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.066771 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:34.066792 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:34.066877 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:34.092631 2088124 cri.go:89] found id: ""
	I1216 04:15:34.092708 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.092733 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:34.092749 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:34.092762 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:34.151180 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:34.151218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:34.167672 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:34.167704 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:34.288358 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:34.277084    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.277708    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.280379    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282393    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282865    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:34.277084    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.277708    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.280379    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282393    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282865    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:34.288382 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:34.288395 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:34.313627 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:34.313660 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:36.841874 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:36.852005 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:36.852078 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:36.875574 2088124 cri.go:89] found id: ""
	I1216 04:15:36.875598 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.875608 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:36.875614 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:36.875674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:36.904945 2088124 cri.go:89] found id: ""
	I1216 04:15:36.905021 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.905045 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:36.905057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:36.905119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:36.930221 2088124 cri.go:89] found id: ""
	I1216 04:15:36.930249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.930259 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:36.930266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:36.930326 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:36.955843 2088124 cri.go:89] found id: ""
	I1216 04:15:36.955870 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.955880 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:36.955887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:36.955947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:36.979492 2088124 cri.go:89] found id: ""
	I1216 04:15:36.979557 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.979583 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:36.979596 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:36.979667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:37.004015 2088124 cri.go:89] found id: ""
	I1216 04:15:37.004045 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.004056 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:37.004064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:37.004144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:37.033766 2088124 cri.go:89] found id: ""
	I1216 04:15:37.033841 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.033868 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:37.033887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:37.033980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:37.058994 2088124 cri.go:89] found id: ""
	I1216 04:15:37.059087 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.059115 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:37.059132 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:37.059146 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:37.121921 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:37.113226    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.113740    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115405    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115894    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.117543    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:37.113226    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.113740    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115405    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115894    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.117543    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:37.121943 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:37.121956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:37.148246 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:37.148285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:37.178974 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:37.179077 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:37.249870 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:37.249909 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:39.789446 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:39.800133 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:39.800214 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:39.824765 2088124 cri.go:89] found id: ""
	I1216 04:15:39.824794 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.824803 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:39.824810 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:39.824872 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:39.849338 2088124 cri.go:89] found id: ""
	I1216 04:15:39.849362 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.849370 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:39.849377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:39.849435 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:39.873874 2088124 cri.go:89] found id: ""
	I1216 04:15:39.873902 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.873911 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:39.873917 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:39.873976 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:39.899109 2088124 cri.go:89] found id: ""
	I1216 04:15:39.899134 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.899143 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:39.899149 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:39.899210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:39.924102 2088124 cri.go:89] found id: ""
	I1216 04:15:39.924128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.924137 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:39.924143 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:39.924208 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:39.949033 2088124 cri.go:89] found id: ""
	I1216 04:15:39.949065 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.949074 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:39.949082 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:39.949144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:39.975169 2088124 cri.go:89] found id: ""
	I1216 04:15:39.975198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.975207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:39.975213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:39.975273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:40.028056 2088124 cri.go:89] found id: ""
	I1216 04:15:40.028085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:40.028094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:40.028104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:40.028116 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:40.085250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:40.085285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:40.101589 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:40.101621 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:40.174562 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:40.165816    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.166569    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.167348    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.168955    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.169429    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:40.165816    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.166569    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.167348    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.168955    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.169429    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:40.174584 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:40.174599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:40.202884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:40.202920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:42.752364 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:42.763300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:42.763369 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:42.792503 2088124 cri.go:89] found id: ""
	I1216 04:15:42.792529 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.792539 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:42.792545 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:42.792608 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:42.821201 2088124 cri.go:89] found id: ""
	I1216 04:15:42.821226 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.821235 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:42.821242 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:42.821304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:42.847075 2088124 cri.go:89] found id: ""
	I1216 04:15:42.847102 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.847110 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:42.847117 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:42.847179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:42.871486 2088124 cri.go:89] found id: ""
	I1216 04:15:42.871510 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.871519 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:42.871525 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:42.871589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:42.896375 2088124 cri.go:89] found id: ""
	I1216 04:15:42.896402 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.896412 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:42.896418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:42.896505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:42.921735 2088124 cri.go:89] found id: ""
	I1216 04:15:42.921811 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.921844 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:42.921865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:42.921950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:42.950925 2088124 cri.go:89] found id: ""
	I1216 04:15:42.950947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.950955 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:42.950961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:42.951019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:42.975785 2088124 cri.go:89] found id: ""
	I1216 04:15:42.975809 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.975817 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:42.975826 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:42.975840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:42.991441 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:42.991473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:43.054494 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:43.046352    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.047054    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.048590    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.049073    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.050630    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:43.054518 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:43.054532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:43.079941 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:43.079979 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:43.107712 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:43.107738 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
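The eight lookups in each cycle above reduce to one crictl invocation per control-plane component. A minimal Go sketch of that check, assuming sudo and crictl are available on the node (component names copied from the log; this is an illustration, not minikube's actual cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs `crictl ps -a --quiet --name=<name>`: -a includes
    // exited containers, --quiet prints one container ID per line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := listContainerIDs(c)
            switch {
            case err != nil:
                fmt.Printf("listing %q failed: %v\n", c, err)
            case len(ids) == 0:
                fmt.Printf("no container found matching %q\n", c) // the W-level lines above
            default:
                fmt.Printf("%q: %v\n", c, ids)
            }
        }
    }

crictl exits zero even when nothing matches, which is why each lookup above logs found id: "" followed by a "0 containers" line rather than an error.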
	I1216 04:15:45.663276 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:45.674206 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:45.674325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:45.698711 2088124 cri.go:89] found id: ""
	I1216 04:15:45.698736 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.698745 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:45.698752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:45.698822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:45.723389 2088124 cri.go:89] found id: ""
	I1216 04:15:45.723413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.723422 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:45.723428 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:45.723494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:45.748842 2088124 cri.go:89] found id: ""
	I1216 04:15:45.748919 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.748935 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:45.748942 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:45.749002 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:45.777156 2088124 cri.go:89] found id: ""
	I1216 04:15:45.777236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.777251 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:45.777265 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:45.777327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:45.802462 2088124 cri.go:89] found id: ""
	I1216 04:15:45.802494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.802503 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:45.802510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:45.802583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:45.829417 2088124 cri.go:89] found id: ""
	I1216 04:15:45.829442 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.829451 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:45.829458 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:45.829521 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:45.854934 2088124 cri.go:89] found id: ""
	I1216 04:15:45.854962 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.854971 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:45.854977 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:45.855095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:45.879249 2088124 cri.go:89] found id: ""
	I1216 04:15:45.879272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.879280 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:45.879289 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:45.879301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:45.895118 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:45.895155 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:45.958262 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:45.949768    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.950592    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952181    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952681    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.954304    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:45.958284 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:45.958298 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:45.984226 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:45.984260 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:46.015984 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:46.016011 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:48.576053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:48.586849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:48.586923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:48.612368 2088124 cri.go:89] found id: ""
	I1216 04:15:48.612394 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.612404 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:48.612410 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:48.612470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:48.641259 2088124 cri.go:89] found id: ""
	I1216 04:15:48.641288 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.641297 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:48.641304 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:48.641368 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:48.665587 2088124 cri.go:89] found id: ""
	I1216 04:15:48.665614 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.665624 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:48.665629 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:48.665704 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:48.691123 2088124 cri.go:89] found id: ""
	I1216 04:15:48.691151 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.691160 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:48.691167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:48.691227 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:48.716275 2088124 cri.go:89] found id: ""
	I1216 04:15:48.716304 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.716314 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:48.716320 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:48.716381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:48.747209 2088124 cri.go:89] found id: ""
	I1216 04:15:48.747236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.747244 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:48.747250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:48.747312 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:48.776967 2088124 cri.go:89] found id: ""
	I1216 04:15:48.776991 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.777001 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:48.777010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:48.777071 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:48.800940 2088124 cri.go:89] found id: ""
	I1216 04:15:48.800965 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.800975 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:48.800985 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:48.800997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:48.856499 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:48.856533 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:48.872208 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:48.872239 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:48.945493 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:48.936737    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.937621    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939381    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939979    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.941612    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:48.945516 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:48.945529 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:48.970477 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:48.970510 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
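Each failed cycle repeats on a roughly three-second cadence (04:15:42, :45, :48, :51, ... above), consistent with a poll-until-deadline loop around the pgrep probe. A sketch of that shape, with the interval and deadline as assumptions since neither appears in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe above: pgrep exits non-zero when no process matches the pattern.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed; the real timeout is not in the log
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // assumed poll interval, matching the observed cadence
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }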
	I1216 04:15:51.499166 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:51.515506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:51.515579 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:51.540271 2088124 cri.go:89] found id: ""
	I1216 04:15:51.540297 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.540306 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:51.540313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:51.540373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:51.564213 2088124 cri.go:89] found id: ""
	I1216 04:15:51.564235 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.564244 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:51.564250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:51.564309 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:51.592901 2088124 cri.go:89] found id: ""
	I1216 04:15:51.592924 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.592933 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:51.592939 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:51.593001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:51.617803 2088124 cri.go:89] found id: ""
	I1216 04:15:51.617831 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.617840 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:51.617847 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:51.617906 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:51.643791 2088124 cri.go:89] found id: ""
	I1216 04:15:51.643814 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.643822 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:51.643830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:51.643894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:51.669293 2088124 cri.go:89] found id: ""
	I1216 04:15:51.669324 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.669335 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:51.669345 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:51.669416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:51.697129 2088124 cri.go:89] found id: ""
	I1216 04:15:51.697155 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.697164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:51.697170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:51.697235 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:51.725605 2088124 cri.go:89] found id: ""
	I1216 04:15:51.725631 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.725640 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:51.725650 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:51.725664 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:51.781941 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:51.781976 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:51.798346 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:51.798372 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:51.861456 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:51.853947    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.854888    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.855930    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.856728    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.857532    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:51.853947    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.854888    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.855930    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.856728    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.857532    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:51.861478 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:51.861491 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:51.886476 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:51.886511 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:54.421185 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:54.432641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:54.432721 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:54.487902 2088124 cri.go:89] found id: ""
	I1216 04:15:54.487936 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.487945 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:54.487952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:54.488026 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:54.530347 2088124 cri.go:89] found id: ""
	I1216 04:15:54.530372 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.530381 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:54.530387 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:54.530450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:54.558305 2088124 cri.go:89] found id: ""
	I1216 04:15:54.558339 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.558348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:54.558354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:54.558423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:54.584247 2088124 cri.go:89] found id: ""
	I1216 04:15:54.584271 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.584280 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:54.584286 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:54.584347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:54.608497 2088124 cri.go:89] found id: ""
	I1216 04:15:54.608526 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.608536 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:54.608542 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:54.608601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:54.634256 2088124 cri.go:89] found id: ""
	I1216 04:15:54.634283 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.634293 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:54.634301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:54.634360 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:54.659092 2088124 cri.go:89] found id: ""
	I1216 04:15:54.659132 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.659141 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:54.659148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:54.659210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:54.683797 2088124 cri.go:89] found id: ""
	I1216 04:15:54.683823 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.683832 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:54.683841 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:54.683852 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:54.713212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:54.713238 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:54.769163 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:54.769199 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:54.784702 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:54.784742 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:54.855379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:54.846290    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.847296    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.848173    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.849670    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.850187    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:54.855412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:54.855425 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:57.382388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:57.393144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:57.393234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:57.418375 2088124 cri.go:89] found id: ""
	I1216 04:15:57.418443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.418467 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:57.418486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:57.418574 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:57.495590 2088124 cri.go:89] found id: ""
	I1216 04:15:57.495668 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.495694 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:57.495716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:57.495813 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:57.536762 2088124 cri.go:89] found id: ""
	I1216 04:15:57.536786 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.536795 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:57.536801 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:57.536859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:57.573379 2088124 cri.go:89] found id: ""
	I1216 04:15:57.573403 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.573412 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:57.573418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:57.573488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:57.601415 2088124 cri.go:89] found id: ""
	I1216 04:15:57.601439 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.601447 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:57.601454 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:57.601514 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:57.625828 2088124 cri.go:89] found id: ""
	I1216 04:15:57.625852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.625860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:57.625866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:57.625932 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:57.651508 2088124 cri.go:89] found id: ""
	I1216 04:15:57.651534 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.651543 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:57.651549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:57.651609 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:57.678194 2088124 cri.go:89] found id: ""
	I1216 04:15:57.678228 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.678242 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:57.678252 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:57.678287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:57.733879 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:57.733916 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:57.750633 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:57.750661 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:57.828100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:57.813970    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.814594    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.815982    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.822718    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.823836    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:57.828131 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:57.828145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:57.855013 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:57.855070 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:00.384284 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:00.398189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:00.398285 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:00.442307 2088124 cri.go:89] found id: ""
	I1216 04:16:00.442337 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.442347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:00.442404 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:00.442487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:00.505962 2088124 cri.go:89] found id: ""
	I1216 04:16:00.505986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.505994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:00.506001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:00.506064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:00.548862 2088124 cri.go:89] found id: ""
	I1216 04:16:00.548940 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.548965 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:00.548984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:00.549098 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:00.576916 2088124 cri.go:89] found id: ""
	I1216 04:16:00.576939 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.576948 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:00.576954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:00.577013 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:00.602863 2088124 cri.go:89] found id: ""
	I1216 04:16:00.602891 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.602901 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:00.602907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:00.602971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:00.628659 2088124 cri.go:89] found id: ""
	I1216 04:16:00.628688 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.628698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:00.628705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:00.628771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:00.654429 2088124 cri.go:89] found id: ""
	I1216 04:16:00.654466 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.654475 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:00.654481 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:00.654556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:00.679835 2088124 cri.go:89] found id: ""
	I1216 04:16:00.679863 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.679877 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:00.679890 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:00.679901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:00.738456 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:00.738501 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:00.754802 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:00.754838 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:00.824660 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:00.815557    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.816379    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818197    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818825    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.820610    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:00.824683 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:00.824698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:00.850142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:00.850176 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
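Every describe-nodes attempt above dies the same way: dial tcp [::1]:8443: connect: connection refused, meaning nothing is listening on the apiserver port inside the node, so kubectl never gets as far as authentication. The same condition can be reproduced with a bare TCP dial, no kubectl or kubeconfig needed (a sketch; host and port taken from the errors above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" from this dial is the same failure kubectl reports.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }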
	I1216 04:16:03.377190 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:03.388732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:03.388827 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:03.417059 2088124 cri.go:89] found id: ""
	I1216 04:16:03.417082 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.417090 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:03.417096 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:03.417157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:03.473568 2088124 cri.go:89] found id: ""
	I1216 04:16:03.473591 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.473599 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:03.473605 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:03.473676 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:03.510076 2088124 cri.go:89] found id: ""
	I1216 04:16:03.510097 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.510105 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:03.510111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:03.510170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:03.546041 2088124 cri.go:89] found id: ""
	I1216 04:16:03.546063 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.546072 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:03.546086 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:03.546148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:03.574587 2088124 cri.go:89] found id: ""
	I1216 04:16:03.574672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.574704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:03.574747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:03.574847 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:03.600940 2088124 cri.go:89] found id: ""
	I1216 04:16:03.600964 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.600973 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:03.600979 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:03.601041 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:03.626500 2088124 cri.go:89] found id: ""
	I1216 04:16:03.626524 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.626537 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:03.626544 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:03.626613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:03.651278 2088124 cri.go:89] found id: ""
	I1216 04:16:03.651345 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.651368 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:03.651386 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:03.651401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:03.713437 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:03.704982    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.705525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707260    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707865    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.709525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:03.713461 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:03.713476 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:03.739122 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:03.739183 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:03.769731 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:03.769761 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:03.825343 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:03.825379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
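The per-cycle gathering steps are four fixed shell pipelines, run in varying order: kubelet and containerd via journalctl, dmesg restricted to warning level and above, and container status with a crictl-or-docker fallback. A standalone sketch that runs the same commands locally rather than through minikube's SSH runner (commands copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        gathers := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            // Prefer crictl if installed, otherwise fall back to docker.
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", g.name, err, out)
        }
    }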
	I1216 04:16:06.341217 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:06.351622 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:06.351695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:06.377192 2088124 cri.go:89] found id: ""
	I1216 04:16:06.377220 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.377229 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:06.377236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:06.377298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:06.407491 2088124 cri.go:89] found id: ""
	I1216 04:16:06.407516 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.407524 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:06.407530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:06.407587 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:06.432854 2088124 cri.go:89] found id: ""
	I1216 04:16:06.432881 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.432890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:06.432896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:06.432954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:06.508461 2088124 cri.go:89] found id: ""
	I1216 04:16:06.508483 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.508502 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:06.508510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:06.508572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:06.537008 2088124 cri.go:89] found id: ""
	I1216 04:16:06.537031 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.537039 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:06.537045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:06.537102 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:06.563652 2088124 cri.go:89] found id: ""
	I1216 04:16:06.563723 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.563740 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:06.563747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:06.563841 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:06.589523 2088124 cri.go:89] found id: ""
	I1216 04:16:06.589599 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.589623 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:06.589642 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:06.589725 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:06.615510 2088124 cri.go:89] found id: ""
	I1216 04:16:06.615577 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.615599 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:06.615623 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:06.615655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:06.670726 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:06.670760 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:06.689463 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:06.689495 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:06.755339 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:06.747246    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.748070    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.749790    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.750091    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.751596    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:06.747246    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.748070    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.749790    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.750091    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.751596    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:06.755362 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:06.755375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:06.780884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:06.780917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
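Every "describe nodes" attempt in this log fails the same way: kubectl on the node cannot reach the apiserver on localhost:8443. A minimal sketch for confirming that symptom from inside the node (run via minikube ssh; connection refused is the expected outcome while the apiserver is down):

	# Is the apiserver process, socket, or health endpoint up? Sketch, run on the node.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	sudo ss -ltn 'sport = :8443'                  # anything listening on 8443?
	curl -sk https://localhost:8443/livez || echo "livez probe refused, matching the log"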
	I1216 04:16:09.313406 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:09.323603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:09.323673 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:09.351600 2088124 cri.go:89] found id: ""
	I1216 04:16:09.351624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.351632 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:09.351639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:09.351699 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:09.375845 2088124 cri.go:89] found id: ""
	I1216 04:16:09.375869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.375878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:09.375885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:09.375950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:09.400733 2088124 cri.go:89] found id: ""
	I1216 04:16:09.400756 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.400764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:09.400770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:09.400830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:09.423762 2088124 cri.go:89] found id: ""
	I1216 04:16:09.423785 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.423793 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:09.423799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:09.423856 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:09.457898 2088124 cri.go:89] found id: ""
	I1216 04:16:09.457971 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.457993 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:09.458014 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:09.458132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:09.507416 2088124 cri.go:89] found id: ""
	I1216 04:16:09.507445 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.507453 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:09.507459 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:09.507518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:09.542968 2088124 cri.go:89] found id: ""
	I1216 04:16:09.543084 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.543115 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:09.543169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:09.543294 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:09.568289 2088124 cri.go:89] found id: ""
	I1216 04:16:09.568313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.568321 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:09.568331 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:09.568343 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:09.630690 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:09.621816    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.622488    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624212    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624769    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.626372    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:09.621816    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.622488    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624212    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624769    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.626372    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:09.630716 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:09.630732 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:09.656388 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:09.656424 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:09.684126 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:09.684152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:09.742624 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:09.742662 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:12.259263 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:12.269891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:12.269959 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:12.294506 2088124 cri.go:89] found id: ""
	I1216 04:16:12.294532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.294541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:12.294546 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:12.294628 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:12.318895 2088124 cri.go:89] found id: ""
	I1216 04:16:12.318924 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.318932 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:12.318938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:12.318994 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:12.344134 2088124 cri.go:89] found id: ""
	I1216 04:16:12.344158 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.344167 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:12.344173 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:12.344234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:12.368552 2088124 cri.go:89] found id: ""
	I1216 04:16:12.368574 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.368583 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:12.368590 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:12.368654 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:12.396826 2088124 cri.go:89] found id: ""
	I1216 04:16:12.396854 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.396863 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:12.396870 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:12.396931 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:12.422048 2088124 cri.go:89] found id: ""
	I1216 04:16:12.422076 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.422085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:12.422092 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:12.422153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:12.485647 2088124 cri.go:89] found id: ""
	I1216 04:16:12.485669 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.485677 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:12.485684 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:12.485750 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:12.529516 2088124 cri.go:89] found id: ""
	I1216 04:16:12.529539 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.529547 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:12.529557 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:12.529569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:12.545674 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:12.545705 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:12.608192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:12.599988    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.600547    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602099    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602578    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.604093    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:12.599988    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.600547    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602099    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602578    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.604093    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:12.608257 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:12.608279 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:12.633428 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:12.633463 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:12.661070 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:12.661097 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
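Each cycle checks the same eight component names with the same crictl invocation. A sketch of that per-component loop, grounded in the commands shown above (run on the node):

	# Mirror minikube's per-component container check.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -n "$ids" ]; then
	    echo "$name: $ids"
	  else
	    echo "no container found matching \"$name\""
	  fi
	done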
	I1216 04:16:15.217877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:15.228678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:15.228748 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:15.253119 2088124 cri.go:89] found id: ""
	I1216 04:16:15.253143 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.253152 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:15.253158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:15.253220 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:15.285145 2088124 cri.go:89] found id: ""
	I1216 04:16:15.285168 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.285177 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:15.285183 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:15.285243 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:15.311311 2088124 cri.go:89] found id: ""
	I1216 04:16:15.311339 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.311348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:15.311355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:15.311416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:15.336241 2088124 cri.go:89] found id: ""
	I1216 04:16:15.336271 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.336286 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:15.336293 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:15.336354 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:15.362230 2088124 cri.go:89] found id: ""
	I1216 04:16:15.362258 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.362268 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:15.362275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:15.362334 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:15.387340 2088124 cri.go:89] found id: ""
	I1216 04:16:15.387362 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.387371 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:15.387377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:15.387437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:15.412173 2088124 cri.go:89] found id: ""
	I1216 04:16:15.412201 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.412210 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:15.412217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:15.412281 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:15.454276 2088124 cri.go:89] found id: ""
	I1216 04:16:15.454354 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.454378 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:15.454404 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:15.454446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:15.556767 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:15.556806 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:15.573628 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:15.573670 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:15.638801 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:15.629487    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.630048    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.631845    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.632428    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.634191    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:15.629487    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.630048    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.631845    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.632428    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.634191    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:15.638865 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:15.638886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:15.663907 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:15.663944 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:18.197135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:18.208099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:18.208177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:18.234350 2088124 cri.go:89] found id: ""
	I1216 04:16:18.234379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.234388 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:18.234394 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:18.234459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:18.258985 2088124 cri.go:89] found id: ""
	I1216 04:16:18.259013 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.259022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:18.259028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:18.259110 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:18.284132 2088124 cri.go:89] found id: ""
	I1216 04:16:18.284156 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.284164 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:18.284171 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:18.284230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:18.309961 2088124 cri.go:89] found id: ""
	I1216 04:16:18.309989 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.309997 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:18.310004 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:18.310108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:18.336186 2088124 cri.go:89] found id: ""
	I1216 04:16:18.336212 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.336221 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:18.336228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:18.336289 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:18.361829 2088124 cri.go:89] found id: ""
	I1216 04:16:18.361858 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.361867 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:18.361874 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:18.361934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:18.388363 2088124 cri.go:89] found id: ""
	I1216 04:16:18.388385 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.388394 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:18.388400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:18.388463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:18.416963 2088124 cri.go:89] found id: ""
	I1216 04:16:18.416988 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.416996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:18.417006 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:18.417018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:18.500995 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:18.503604 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:18.521452 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:18.521531 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:18.589729 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:18.580618    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.581508    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583296    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583964    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.585797    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:18.580618    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.581508    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583296    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583964    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.585797    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:18.589761 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:18.589775 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:18.616012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:18.616047 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.144794 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:21.155656 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:21.155729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:21.184379 2088124 cri.go:89] found id: ""
	I1216 04:16:21.184403 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.184411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:21.184417 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:21.184484 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:21.210137 2088124 cri.go:89] found id: ""
	I1216 04:16:21.210163 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.210172 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:21.210178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:21.210240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:21.235283 2088124 cri.go:89] found id: ""
	I1216 04:16:21.235307 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.235315 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:21.235321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:21.235381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:21.263715 2088124 cri.go:89] found id: ""
	I1216 04:16:21.263738 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.263746 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:21.263753 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:21.263823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:21.287600 2088124 cri.go:89] found id: ""
	I1216 04:16:21.287624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.287632 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:21.287638 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:21.287698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:21.315897 2088124 cri.go:89] found id: ""
	I1216 04:16:21.315919 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.315927 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:21.315934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:21.315993 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:21.339842 2088124 cri.go:89] found id: ""
	I1216 04:16:21.339866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.339874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:21.339880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:21.339939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:21.364501 2088124 cri.go:89] found id: ""
	I1216 04:16:21.364526 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.364535 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:21.364544 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:21.364556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:21.379974 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:21.380060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:21.474639 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:21.442912    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.443871    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.447436    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.448028    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.468365    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:21.442912    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.443871    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.447436    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.448028    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.468365    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:21.474664 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:21.474676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:21.531857 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:21.531938 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.561122 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:21.561149 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
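The timestamps show the whole health check re-running roughly every three seconds. A sketch of an equivalent bounded wait, with a hypothetical five-minute budget (the real test budget is not visible in this excerpt):

	# Bounded wait matching the ~3s retry cadence seen in the log.
	deadline=$((SECONDS + 300))   # hypothetical 5-minute budget
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "timed out waiting for kube-apiserver" >&2
	    exit 1
	  fi
	  sleep 3
	done
	echo "kube-apiserver is running"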
	I1216 04:16:24.116616 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:24.126986 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:24.127075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:24.154481 2088124 cri.go:89] found id: ""
	I1216 04:16:24.154507 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.154526 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:24.154533 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:24.154591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:24.180064 2088124 cri.go:89] found id: ""
	I1216 04:16:24.180087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.180095 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:24.180103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:24.180165 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:24.205398 2088124 cri.go:89] found id: ""
	I1216 04:16:24.205424 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.205433 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:24.205440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:24.205499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:24.230340 2088124 cri.go:89] found id: ""
	I1216 04:16:24.230369 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.230377 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:24.230384 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:24.230445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:24.255009 2088124 cri.go:89] found id: ""
	I1216 04:16:24.255056 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.255066 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:24.255072 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:24.255131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:24.280187 2088124 cri.go:89] found id: ""
	I1216 04:16:24.280214 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.280224 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:24.280230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:24.280287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:24.304688 2088124 cri.go:89] found id: ""
	I1216 04:16:24.304711 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.304720 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:24.304726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:24.304788 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:24.329482 2088124 cri.go:89] found id: ""
	I1216 04:16:24.329505 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.329514 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:24.329523 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:24.329535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:24.345077 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:24.345106 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:24.410594 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:24.402842    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.403261    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.404850    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.405187    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.406756    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:24.402842    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.403261    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.404850    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.405187    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.406756    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:24.410665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:24.410695 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:24.437142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:24.437180 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:24.512425 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:24.512454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
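With no control-plane containers found at all, the next things worth checking are the static pod manifests and kubelet itself; assuming minikube's usual kubeadm layout, where the control plane runs as static pods, missing manifests or a failing kubelet would explain this log. A sketch under that assumption (run on the node):

	# Assumption: control plane runs as kubeadm static pods under /etc/kubernetes/manifests.
	ls -l /etc/kubernetes/manifests/
	systemctl is-active kubelet
	sudo journalctl -u kubelet -n 50 --no-pager | tail -n 20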
	I1216 04:16:27.075945 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:27.086676 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:27.086751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:27.111375 2088124 cri.go:89] found id: ""
	I1216 04:16:27.111402 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.111411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:27.111418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:27.111479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:27.136068 2088124 cri.go:89] found id: ""
	I1216 04:16:27.136100 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.136109 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:27.136115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:27.136174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:27.160473 2088124 cri.go:89] found id: ""
	I1216 04:16:27.160503 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.160513 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:27.160519 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:27.160580 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:27.186608 2088124 cri.go:89] found id: ""
	I1216 04:16:27.186632 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.186639 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:27.186646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:27.186708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:27.217149 2088124 cri.go:89] found id: ""
	I1216 04:16:27.217173 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.217182 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:27.217189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:27.217253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:27.243558 2088124 cri.go:89] found id: ""
	I1216 04:16:27.243583 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.243592 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:27.243598 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:27.243665 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:27.269387 2088124 cri.go:89] found id: ""
	I1216 04:16:27.269415 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.269425 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:27.269433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:27.269494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:27.296709 2088124 cri.go:89] found id: ""
	I1216 04:16:27.296778 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.296790 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:27.296800 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:27.296811 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:27.327331 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:27.327359 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:27.384171 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:27.384206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:27.400922 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:27.400958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:27.528794 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:27.519212    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.519883    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521482    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521988    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.523599    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:27.519212    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.519883    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521482    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521988    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.523599    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:27.528819 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:27.528835 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
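
The cycle above is minikube's apiserver health wait: it first looks for a running kube-apiserver process with pgrep, and when that fails it enumerates each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) through crictl's name filter, then gathers the kubelet, dmesg, describe-nodes, containerd, and container-status logs. Every lookup returns an empty ID list. As a rough sketch — not minikube's code, just the equivalent manual re-run of the per-component check, assuming crictl is on PATH inside the node:

    # hypothetical manual replay of the per-component check seen above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no containers matching $name"
    done
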
	I1216 04:16:30.057685 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:30.079715 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:30.079801 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:30.110026 2088124 cri.go:89] found id: ""
	I1216 04:16:30.110054 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.110063 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:30.110076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:30.110143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:30.137960 2088124 cri.go:89] found id: ""
	I1216 04:16:30.137986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.137994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:30.138001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:30.138065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:30.165148 2088124 cri.go:89] found id: ""
	I1216 04:16:30.165177 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.165186 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:30.165194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:30.165283 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:30.192836 2088124 cri.go:89] found id: ""
	I1216 04:16:30.192866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.192875 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:30.192883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:30.192951 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:30.220187 2088124 cri.go:89] found id: ""
	I1216 04:16:30.220213 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.220227 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:30.220233 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:30.220333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:30.247843 2088124 cri.go:89] found id: ""
	I1216 04:16:30.247872 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.247882 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:30.247889 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:30.247980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:30.274429 2088124 cri.go:89] found id: ""
	I1216 04:16:30.274454 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.274463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:30.274470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:30.274583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:30.302775 2088124 cri.go:89] found id: ""
	I1216 04:16:30.302809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.302819 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:30.302844 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:30.302863 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:30.318968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:30.318999 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:30.383767 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:30.383790 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:30.383804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:30.410095 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:30.410131 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:30.468723 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:30.468804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
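
Each "describe nodes" attempt fails the same way: kubectl dials https://localhost:8443 inside the node and gets connection refused on [::1]:8443, meaning nothing is listening on the apiserver port at all (as opposed to a TLS or authorization failure). A quick way to confirm that from the host would be something like the following; the profile name is a placeholder here, not taken from this log:

    # hypothetical diagnostic, assuming a profile named <profile>
    minikube -p <profile> ssh -- sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    minikube -p <profile> ssh -- curl -sk https://localhost:8443/healthz
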
	I1216 04:16:33.056394 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:33.067079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:33.067155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:33.092150 2088124 cri.go:89] found id: ""
	I1216 04:16:33.092178 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.092188 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:33.092194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:33.092260 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:33.117824 2088124 cri.go:89] found id: ""
	I1216 04:16:33.117852 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.117861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:33.117868 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:33.117927 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:33.143646 2088124 cri.go:89] found id: ""
	I1216 04:16:33.143672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.143680 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:33.143686 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:33.143744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:33.169791 2088124 cri.go:89] found id: ""
	I1216 04:16:33.169818 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.169826 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:33.169833 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:33.169893 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:33.194288 2088124 cri.go:89] found id: ""
	I1216 04:16:33.194313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.194323 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:33.194329 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:33.194388 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:33.221028 2088124 cri.go:89] found id: ""
	I1216 04:16:33.221062 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.221071 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:33.221078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:33.221178 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:33.245742 2088124 cri.go:89] found id: ""
	I1216 04:16:33.245769 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.245778 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:33.245784 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:33.245852 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:33.270847 2088124 cri.go:89] found id: ""
	I1216 04:16:33.270870 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.270879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:33.270888 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:33.270899 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:33.327247 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:33.327283 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:33.342917 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:33.342947 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:33.407775 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:33.407796 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:33.407809 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:33.433956 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:33.433990 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:36.019705 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:36.031406 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:36.031494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:36.061621 2088124 cri.go:89] found id: ""
	I1216 04:16:36.061647 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.061657 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:36.061664 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:36.061730 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:36.088137 2088124 cri.go:89] found id: ""
	I1216 04:16:36.088162 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.088171 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:36.088178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:36.088239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:36.113810 2088124 cri.go:89] found id: ""
	I1216 04:16:36.113833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.113842 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:36.113849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:36.113913 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:36.139840 2088124 cri.go:89] found id: ""
	I1216 04:16:36.139866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.139874 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:36.139883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:36.139965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:36.168529 2088124 cri.go:89] found id: ""
	I1216 04:16:36.168553 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.168561 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:36.168567 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:36.168627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:36.196976 2088124 cri.go:89] found id: ""
	I1216 04:16:36.197002 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.197027 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:36.197050 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:36.197133 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:36.221877 2088124 cri.go:89] found id: ""
	I1216 04:16:36.221903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.221912 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:36.221918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:36.222032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:36.248921 2088124 cri.go:89] found id: ""
	I1216 04:16:36.248947 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.248956 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:36.248966 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:36.248977 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:36.264593 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:36.264622 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:36.329217 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:36.329239 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:36.329252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:36.354482 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:36.354514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:36.382824 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:36.382890 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
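
Two details worth noting across these passes: the order in which the log sources are gathered (kubelet, dmesg, describe nodes, containerd, container status) shuffles from one pass to the next, which is consistent with iteration over a Go map rather than a fixed list; and the dmesg invocation filters aggressively. For reference, its flags are standard util-linux options, split out below:

    # the dmesg call from the log, with its combined -PH flags split out
    sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -P            do not pipe the output into a pager
    # -H            human-readable timestamps
    # -L=never      disable colored output
    # --level ...   keep only warning-and-worse messages
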
	I1216 04:16:38.944004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:38.957491 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:38.957613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:38.982761 2088124 cri.go:89] found id: ""
	I1216 04:16:38.982787 2088124 logs.go:282] 0 containers: []
	W1216 04:16:38.982796 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:38.982803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:38.982861 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:39.010506 2088124 cri.go:89] found id: ""
	I1216 04:16:39.010532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.010542 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:39.010549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:39.010630 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:39.035827 2088124 cri.go:89] found id: ""
	I1216 04:16:39.035853 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.035862 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:39.035875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:39.035934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:39.060421 2088124 cri.go:89] found id: ""
	I1216 04:16:39.060448 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.060457 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:39.060463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:39.060550 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:39.087481 2088124 cri.go:89] found id: ""
	I1216 04:16:39.087504 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.087512 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:39.087518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:39.087577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:39.111994 2088124 cri.go:89] found id: ""
	I1216 04:16:39.112028 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.112037 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:39.112044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:39.112114 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:39.136060 2088124 cri.go:89] found id: ""
	I1216 04:16:39.136093 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.136101 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:39.136108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:39.136186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:39.166063 2088124 cri.go:89] found id: ""
	I1216 04:16:39.166090 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.166099 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:39.166109 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:39.166120 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:39.222912 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:39.222949 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:39.239064 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:39.239096 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:39.305289 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:39.305312 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:39.305326 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:39.330965 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:39.330997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:41.862236 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:41.873016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:41.873089 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:41.900650 2088124 cri.go:89] found id: ""
	I1216 04:16:41.900675 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.900684 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:41.900691 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:41.900754 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:41.924986 2088124 cri.go:89] found id: ""
	I1216 04:16:41.925012 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.925022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:41.925028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:41.925090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:41.950157 2088124 cri.go:89] found id: ""
	I1216 04:16:41.950182 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.950191 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:41.950197 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:41.950257 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:41.975738 2088124 cri.go:89] found id: ""
	I1216 04:16:41.975763 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.975772 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:41.975778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:41.975837 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:42.008172 2088124 cri.go:89] found id: ""
	I1216 04:16:42.008203 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.008214 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:42.008221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:42.008295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:42.036816 2088124 cri.go:89] found id: ""
	I1216 04:16:42.036841 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.036851 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:42.036858 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:42.036969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:42.066668 2088124 cri.go:89] found id: ""
	I1216 04:16:42.066697 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.066706 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:42.066713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:42.066787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:42.098167 2088124 cri.go:89] found id: ""
	I1216 04:16:42.098200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.098217 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:42.098231 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:42.098245 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:42.184589 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:42.184617 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:42.184635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:42.214306 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:42.214348 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:42.253172 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:42.253203 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:42.312705 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:42.312757 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
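
The timestamps put one full gathering pass roughly every three seconds (04:16:27, :30, :33, :36, :39, :42, ...), so this is a fixed-interval poll that keeps cycling until minikube's wait deadline expires. Reduced to a sketch, the cadence is nothing more than:

    # minimal sketch of the ~3 s polling cadence, not minikube's actual code
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # each new pass re-lists containers and re-gathers logs
    done
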
	I1216 04:16:44.831426 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:44.842214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:44.842287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:44.872804 2088124 cri.go:89] found id: ""
	I1216 04:16:44.872833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.872843 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:44.872851 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:44.872915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:44.903988 2088124 cri.go:89] found id: ""
	I1216 04:16:44.904064 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.904089 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:44.904108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:44.904200 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:44.930758 2088124 cri.go:89] found id: ""
	I1216 04:16:44.930837 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.930861 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:44.930880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:44.930971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:44.955785 2088124 cri.go:89] found id: ""
	I1216 04:16:44.955809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.955817 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:44.955823 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:44.955883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:44.983685 2088124 cri.go:89] found id: ""
	I1216 04:16:44.983762 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.983785 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:44.983800 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:44.983876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:45.034599 2088124 cri.go:89] found id: ""
	I1216 04:16:45.034623 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.034631 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:45.034639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:45.034713 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:45.106900 2088124 cri.go:89] found id: ""
	I1216 04:16:45.106927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.106937 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:45.106945 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:45.107019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:45.148790 2088124 cri.go:89] found id: ""
	I1216 04:16:45.148816 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.148826 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:45.148837 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:45.148851 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:45.242114 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:45.242166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:45.275372 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:45.275416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:45.355175 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:45.355241 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:45.355263 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:45.382211 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:45.382248 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:47.915609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:47.927521 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:47.927603 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:47.957165 2088124 cri.go:89] found id: ""
	I1216 04:16:47.957192 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.957205 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:47.957212 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:47.957278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:47.983356 2088124 cri.go:89] found id: ""
	I1216 04:16:47.983379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.983396 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:47.983408 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:47.983475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:48.012782 2088124 cri.go:89] found id: ""
	I1216 04:16:48.012807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.012815 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:48.012822 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:48.012887 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:48.042072 2088124 cri.go:89] found id: ""
	I1216 04:16:48.042096 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.042105 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:48.042111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:48.042172 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:48.066925 2088124 cri.go:89] found id: ""
	I1216 04:16:48.066954 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.066963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:48.066970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:48.067032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:48.097340 2088124 cri.go:89] found id: ""
	I1216 04:16:48.097366 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.097378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:48.097385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:48.097470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:48.126364 2088124 cri.go:89] found id: ""
	I1216 04:16:48.126397 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.126407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:48.126413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:48.126510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:48.152175 2088124 cri.go:89] found id: ""
	I1216 04:16:48.152199 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.152207 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:48.152217 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:48.152232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:48.216814 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:48.216861 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:48.235153 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:48.235187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:48.303336 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:48.303404 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:48.303433 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:48.332107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:48.332175 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
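
By this point every pass has returned found id: "" for all nine component names, so no control-plane container was ever created during this window — not even in an exited state, since crictl ps -a includes stopped containers. That shifts suspicion to the layer below the containers, and the kubelet and containerd journals gathered on each pass are where the root cause would surface. A one-liner to confirm the runtime is empty (hypothetical, run inside the node):

    # count all containers, running or exited; 0 matches the log above
    sudo crictl ps -a --quiet | wc -l
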
	I1216 04:16:50.863912 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:50.876115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:50.876205 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:50.902170 2088124 cri.go:89] found id: ""
	I1216 04:16:50.902200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.902209 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:50.902216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:50.902273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:50.925870 2088124 cri.go:89] found id: ""
	I1216 04:16:50.925903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.925912 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:50.925918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:50.925986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:50.950257 2088124 cri.go:89] found id: ""
	I1216 04:16:50.950283 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.950293 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:50.950299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:50.950358 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:50.975507 2088124 cri.go:89] found id: ""
	I1216 04:16:50.975531 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.975541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:50.975547 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:50.975607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:50.999494 2088124 cri.go:89] found id: ""
	I1216 04:16:50.999520 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.999529 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:50.999535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:50.999599 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:51.026658 2088124 cri.go:89] found id: ""
	I1216 04:16:51.026685 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.026694 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:51.026701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:51.026760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:51.051749 2088124 cri.go:89] found id: ""
	I1216 04:16:51.051775 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.051784 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:51.051790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:51.051868 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:51.076898 2088124 cri.go:89] found id: ""
	I1216 04:16:51.076927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.076938 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:51.076948 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:51.076960 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:51.103255 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:51.103293 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:51.134833 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:51.134859 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:51.193704 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:51.193741 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:51.212900 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:51.212928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:51.297351 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:51.288083   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.288656   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.289678   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291273   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291873   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
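	The round of checks above is the runner's control-plane probe: pgrep for a kube-apiserver process, then one crictl listing per expected component, each returning an empty ID list because no Kubernetes container ever started. A minimal standalone sketch of the same probe, assuming shell access to the node and crictl on the PATH (the component names are taken from the listings above):
	
	    # One crictl listing per control-plane component, as in the log;
	    # empty output for a name means that container never started.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done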
	I1216 04:16:53.797612 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:53.808331 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:53.808407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:53.832739 2088124 cri.go:89] found id: ""
	I1216 04:16:53.832807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.832829 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:53.832850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:53.832945 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:53.857832 2088124 cri.go:89] found id: ""
	I1216 04:16:53.857869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.857878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:53.857885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:53.857954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:53.885064 2088124 cri.go:89] found id: ""
	I1216 04:16:53.885087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.885095 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:53.885101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:53.885158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:53.913372 2088124 cri.go:89] found id: ""
	I1216 04:16:53.913451 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.913475 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:53.913493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:53.913586 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:53.940577 2088124 cri.go:89] found id: ""
	I1216 04:16:53.940646 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.940673 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:53.940687 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:53.940764 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:53.966496 2088124 cri.go:89] found id: ""
	I1216 04:16:53.966534 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.966543 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:53.966552 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:53.966623 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:53.992796 2088124 cri.go:89] found id: ""
	I1216 04:16:53.992820 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.992828 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:53.992834 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:53.992896 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:54.019752 2088124 cri.go:89] found id: ""
	I1216 04:16:54.019840 2088124 logs.go:282] 0 containers: []
	W1216 04:16:54.019857 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:54.019868 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:54.019880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:54.079349 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:54.079394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:54.098509 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:54.098593 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:54.166447 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:54.157535   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.158625   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160232   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160884   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.162506   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:54.166510 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:54.166549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:54.191683 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:54.191718 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:56.719163 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:56.748538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:56.748613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:56.786218 2088124 cri.go:89] found id: ""
	I1216 04:16:56.786244 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.786253 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:56.786259 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:56.786320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:56.812994 2088124 cri.go:89] found id: ""
	I1216 04:16:56.813016 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.813024 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:56.813031 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:56.813090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:56.841729 2088124 cri.go:89] found id: ""
	I1216 04:16:56.841751 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.841760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:56.841766 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:56.841825 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:56.870356 2088124 cri.go:89] found id: ""
	I1216 04:16:56.870379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.870387 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:56.870393 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:56.870451 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:56.899841 2088124 cri.go:89] found id: ""
	I1216 04:16:56.899867 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.899877 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:56.899883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:56.899943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:56.924316 2088124 cri.go:89] found id: ""
	I1216 04:16:56.924343 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.924352 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:56.924359 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:56.924417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:56.948789 2088124 cri.go:89] found id: ""
	I1216 04:16:56.948815 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.948824 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:56.948830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:56.948891 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:56.977394 2088124 cri.go:89] found id: ""
	I1216 04:16:56.977423 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.977432 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:56.977441 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:56.977453 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:57.032732 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:57.032770 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:57.048273 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:57.048302 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:57.115644 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:57.106949   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.107590   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109199   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109751   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.111454   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:57.115665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:57.115685 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:57.140936 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:57.140971 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
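	Every describe-nodes attempt fails identically: kubectl dials https://localhost:8443 and gets connection refused, which is consistent with the empty crictl listings (nothing is listening because no apiserver container exists). A hypothetical manual check along the same lines; ss and curl here are illustrative assumptions, not commands from this run:
	
	    # If nothing listens on 8443, kubectl's "connection refused"
	    # is expected rather than a TLS or auth problem.
	    sudo ss -ltn 'sport = :8443'
	    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"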
	I1216 04:16:59.669285 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:59.682343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:59.682415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:59.722722 2088124 cri.go:89] found id: ""
	I1216 04:16:59.722750 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.722758 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:59.722764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:59.722824 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:59.778634 2088124 cri.go:89] found id: ""
	I1216 04:16:59.778659 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.778667 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:59.778674 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:59.778733 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:59.817378 2088124 cri.go:89] found id: ""
	I1216 04:16:59.817470 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.817498 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:59.817538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:59.817644 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:59.848330 2088124 cri.go:89] found id: ""
	I1216 04:16:59.848356 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.848365 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:59.848372 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:59.848459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:59.880033 2088124 cri.go:89] found id: ""
	I1216 04:16:59.880061 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.880074 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:59.880080 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:59.880154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:59.909206 2088124 cri.go:89] found id: ""
	I1216 04:16:59.909231 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.909241 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:59.909248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:59.909351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:59.934604 2088124 cri.go:89] found id: ""
	I1216 04:16:59.934630 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.934639 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:59.934646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:59.934708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:59.959916 2088124 cri.go:89] found id: ""
	I1216 04:16:59.959994 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.960011 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:59.960022 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:59.960035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:00.015911 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:00.016018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:00.105766 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:00.105818 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:00.319730 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:00.290488   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.291031   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.293087   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.295468   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.296159   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:00.319780 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:00.319793 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:00.371509 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:00.371569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:02.957388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:02.969075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:02.969174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:02.996244 2088124 cri.go:89] found id: ""
	I1216 04:17:02.996268 2088124 logs.go:282] 0 containers: []
	W1216 04:17:02.996276 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:02.996283 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:02.996351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:03.035674 2088124 cri.go:89] found id: ""
	I1216 04:17:03.035699 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.035709 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:03.035716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:03.035786 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:03.063231 2088124 cri.go:89] found id: ""
	I1216 04:17:03.063262 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.063271 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:03.063278 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:03.063348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:03.090248 2088124 cri.go:89] found id: ""
	I1216 04:17:03.090277 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.090285 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:03.090292 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:03.090357 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:03.118599 2088124 cri.go:89] found id: ""
	I1216 04:17:03.118628 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.118637 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:03.118643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:03.118705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:03.145364 2088124 cri.go:89] found id: ""
	I1216 04:17:03.145394 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.145403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:03.145411 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:03.145476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:03.174022 2088124 cri.go:89] found id: ""
	I1216 04:17:03.174047 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.174057 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:03.174064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:03.174132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:03.201495 2088124 cri.go:89] found id: ""
	I1216 04:17:03.201518 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.201527 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:03.201537 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:03.201549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:03.259166 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:03.259202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:03.276281 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:03.276319 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:03.347465 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:03.338991   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.339696   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341341   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341888   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.343162   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:03.347486 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:03.347499 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:03.374421 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:03.374460 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
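	The container-status line uses a small fallback chain: resolve crictl via which, fall back to the bare name if the lookup fails, and fall back to docker ps if crictl itself fails. The same one-liner, expanded for readability (behavior unchanged):
	
	    # Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    CRICTL=$(which crictl || echo crictl)
	    sudo "$CRICTL" ps -a || sudo docker ps -a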
	I1216 04:17:05.905789 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:05.917930 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:05.918028 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:05.944068 2088124 cri.go:89] found id: ""
	I1216 04:17:05.944092 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.944100 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:05.944106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:05.944170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:05.971887 2088124 cri.go:89] found id: ""
	I1216 04:17:05.971915 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.971924 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:05.971931 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:05.971998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:05.999415 2088124 cri.go:89] found id: ""
	I1216 04:17:05.999452 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.999467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:05.999474 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:05.999547 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:06.038021 2088124 cri.go:89] found id: ""
	I1216 04:17:06.038109 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.038128 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:06.038138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:06.038231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:06.069582 2088124 cri.go:89] found id: ""
	I1216 04:17:06.069610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.069620 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:06.069626 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:06.069702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:06.102728 2088124 cri.go:89] found id: ""
	I1216 04:17:06.102753 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.102763 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:06.102770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:06.102846 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:06.131178 2088124 cri.go:89] found id: ""
	I1216 04:17:06.131372 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.131401 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:06.131420 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:06.131527 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:06.158881 2088124 cri.go:89] found id: ""
	I1216 04:17:06.158966 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.158996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:06.159061 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:06.159098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:06.185524 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:06.185554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:06.221206 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:06.221235 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:06.280309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:06.280357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:06.297032 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:06.297065 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:06.363186 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:06.354862   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.355590   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357189   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357554   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.359080   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:08.864854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:08.875530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:08.875607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:08.900341 2088124 cri.go:89] found id: ""
	I1216 04:17:08.900376 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.900386 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:08.900392 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:08.900453 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:08.924614 2088124 cri.go:89] found id: ""
	I1216 04:17:08.924638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.924647 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:08.924653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:08.924715 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:08.949702 2088124 cri.go:89] found id: ""
	I1216 04:17:08.949729 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.949738 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:08.949744 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:08.949803 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:08.973818 2088124 cri.go:89] found id: ""
	I1216 04:17:08.973848 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.973858 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:08.973864 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:08.973923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:08.999010 2088124 cri.go:89] found id: ""
	I1216 04:17:08.999033 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.999079 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:08.999087 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:08.999149 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:09.030095 2088124 cri.go:89] found id: ""
	I1216 04:17:09.030122 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.030131 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:09.030138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:09.030198 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:09.054300 2088124 cri.go:89] found id: ""
	I1216 04:17:09.054324 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.054332 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:09.054339 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:09.054397 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:09.078301 2088124 cri.go:89] found id: ""
	I1216 04:17:09.078328 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.078337 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:09.078346 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:09.078358 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:09.106185 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:09.106220 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:09.161474 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:09.161513 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:09.177365 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:09.177394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:09.242353 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:09.233841   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.234299   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236131   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236653   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.238465   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:09.242378 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:09.242392 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:11.767582 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:11.779587 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:11.779667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:11.806280 2088124 cri.go:89] found id: ""
	I1216 04:17:11.806308 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.806317 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:11.806323 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:11.806386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:11.831161 2088124 cri.go:89] found id: ""
	I1216 04:17:11.831187 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.831196 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:11.831203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:11.831262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:11.859758 2088124 cri.go:89] found id: ""
	I1216 04:17:11.859781 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.859790 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:11.859796 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:11.859853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:11.884445 2088124 cri.go:89] found id: ""
	I1216 04:17:11.884473 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.884483 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:11.884489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:11.884567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:11.909783 2088124 cri.go:89] found id: ""
	I1216 04:17:11.909860 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.909886 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:11.909904 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:11.909989 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:11.934802 2088124 cri.go:89] found id: ""
	I1216 04:17:11.934833 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.934842 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:11.934848 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:11.934909 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:11.961240 2088124 cri.go:89] found id: ""
	I1216 04:17:11.961318 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.961344 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:11.961358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:11.961431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:11.985352 2088124 cri.go:89] found id: ""
	I1216 04:17:11.985380 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.985389 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:11.985404 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:11.985416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:12.050891 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:12.042955   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.043613   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045154   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045461   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.046975   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:12.050912 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:12.050925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:12.076153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:12.076186 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:12.108364 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:12.108393 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:12.164122 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:12.164161 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
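	(Editor's note: each cycle above and below probes for control-plane containers with `sudo crictl ps -a --quiet --name=<component>`; `--quiet` prints one container ID per line, so empty output is exactly what the log records as `found id: ""` followed by `0 containers`. A minimal standalone sketch of that probe, assuming crictl is on PATH and sudo is available — the helper name is mine, not minikube's actual cri.go code:)

```go
// Sketch of the container probe seen in the log: run the same crictl query
// and treat empty output as "no matching container".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the non-empty container IDs, one per output line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		// In the failing run above this prints "0 containers" for every component.
		fmt.Printf("%d containers matching %q: %v\n", len(ids), name, ids)
	}
}
```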
	I1216 04:17:14.681316 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:14.698056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:14.698131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:14.764358 2088124 cri.go:89] found id: ""
	I1216 04:17:14.764382 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.764391 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:14.764397 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:14.764468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:14.792079 2088124 cri.go:89] found id: ""
	I1216 04:17:14.792110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.792120 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:14.792130 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:14.792197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:14.817831 2088124 cri.go:89] found id: ""
	I1216 04:17:14.817857 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.817867 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:14.817875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:14.817935 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:14.846609 2088124 cri.go:89] found id: ""
	I1216 04:17:14.846638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.846646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:14.846653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:14.846712 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:14.871213 2088124 cri.go:89] found id: ""
	I1216 04:17:14.871237 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.871246 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:14.871255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:14.871313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:14.896165 2088124 cri.go:89] found id: ""
	I1216 04:17:14.896192 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.896201 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:14.896208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:14.896269 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:14.922595 2088124 cri.go:89] found id: ""
	I1216 04:17:14.922621 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.922629 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:14.922635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:14.922698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:14.949236 2088124 cri.go:89] found id: ""
	I1216 04:17:14.949303 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.949327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:14.949344 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:14.949356 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:15.027151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:15.009633   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.011301   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.012737   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.013346   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.016022   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:15.009633   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.011301   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.012737   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.013346   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.016022   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:15.027238 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:15.027269 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:15.060605 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:15.060646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:15.093643 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:15.093728 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:15.150597 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:15.150635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:17.668643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:17.679947 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:17.680020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:17.722386 2088124 cri.go:89] found id: ""
	I1216 04:17:17.722409 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.722417 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:17.722423 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:17.722487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:17.775941 2088124 cri.go:89] found id: ""
	I1216 04:17:17.775964 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.775974 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:17.775980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:17.776040 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:17.802436 2088124 cri.go:89] found id: ""
	I1216 04:17:17.802458 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.802467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:17.802473 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:17.802532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:17.828371 2088124 cri.go:89] found id: ""
	I1216 04:17:17.828399 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.828409 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:17.828415 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:17.828479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:17.853344 2088124 cri.go:89] found id: ""
	I1216 04:17:17.853370 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.853379 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:17.853386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:17.853479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:17.881429 2088124 cri.go:89] found id: ""
	I1216 04:17:17.881456 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.881465 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:17.881471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:17.881533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:17.904862 2088124 cri.go:89] found id: ""
	I1216 04:17:17.904938 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.904961 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:17.904975 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:17.905050 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:17.929897 2088124 cri.go:89] found id: ""
	I1216 04:17:17.929977 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.930001 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:17.930028 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:17.930064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:17.998744 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:17.990296   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.990760   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.992790   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.993233   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.994439   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:17.990296   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.990760   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.992790   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.993233   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.994439   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:17.998813 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:17.998840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:18.026132 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:18.026171 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:18.058645 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:18.058676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:18.115432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:18.115467 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:20.631899 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:20.643452 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:20.643535 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:20.668165 2088124 cri.go:89] found id: ""
	I1216 04:17:20.668190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.668199 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:20.668205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:20.668263 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:20.724732 2088124 cri.go:89] found id: ""
	I1216 04:17:20.724759 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.724768 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:20.724774 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:20.724845 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:20.771015 2088124 cri.go:89] found id: ""
	I1216 04:17:20.771058 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.771068 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:20.771075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:20.771155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:20.805632 2088124 cri.go:89] found id: ""
	I1216 04:17:20.805662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.805672 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:20.805679 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:20.805747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:20.835160 2088124 cri.go:89] found id: ""
	I1216 04:17:20.835226 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.835242 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:20.835249 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:20.835308 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:20.861499 2088124 cri.go:89] found id: ""
	I1216 04:17:20.861522 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.861531 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:20.861538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:20.861595 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:20.885895 2088124 cri.go:89] found id: ""
	I1216 04:17:20.885919 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.885928 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:20.885934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:20.885998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:20.910445 2088124 cri.go:89] found id: ""
	I1216 04:17:20.910468 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.910477 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:20.910486 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:20.910498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:20.966176 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:20.966211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:20.983062 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:20.983092 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:21.049819 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:21.041149   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.041819   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.043484   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.044157   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.045775   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:21.041149   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.041819   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.043484   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.044157   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.045775   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:21.049842 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:21.049856 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:21.075330 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:21.075370 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:23.603121 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:23.613760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:23.613834 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:23.642856 2088124 cri.go:89] found id: ""
	I1216 04:17:23.642882 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.642890 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:23.642897 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:23.642957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:23.671150 2088124 cri.go:89] found id: ""
	I1216 04:17:23.671175 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.671183 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:23.671189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:23.671247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:23.733230 2088124 cri.go:89] found id: ""
	I1216 04:17:23.733256 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.733265 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:23.733271 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:23.733330 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:23.782653 2088124 cri.go:89] found id: ""
	I1216 04:17:23.782679 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.782688 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:23.782694 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:23.782759 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:23.810224 2088124 cri.go:89] found id: ""
	I1216 04:17:23.810249 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.810259 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:23.810266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:23.810327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:23.835579 2088124 cri.go:89] found id: ""
	I1216 04:17:23.835604 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.835613 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:23.835620 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:23.835680 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:23.864585 2088124 cri.go:89] found id: ""
	I1216 04:17:23.864610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.864618 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:23.864625 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:23.864683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:23.892217 2088124 cri.go:89] found id: ""
	I1216 04:17:23.892294 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.892311 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:23.892322 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:23.892334 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:23.955889 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:23.947392   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.947993   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949516   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949846   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.951412   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:23.947392   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.947993   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949516   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949846   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.951412   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:23.955910 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:23.955929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:23.983017 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:23.983064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:24.018919 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:24.018946 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:24.076537 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:24.076578 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
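	(Editor's note: every `kubectl describe nodes` attempt in this stretch fails the same way, with `dial tcp [::1]:8443: connect: connection refused`. A refused TCP dial, as opposed to a timeout or a TLS/auth error, means nothing is listening on the apiserver port at all, which is consistent with the empty kube-apiserver container list in each cycle. A quick check along those lines — a hypothetical helper, not minikube code:)

```go
// Distinguish the failure mode seen above: a plain TCP dial to the apiserver
// port tells "nothing listening" apart from "listening but unhealthy".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// e.g. "dial tcp [::1]:8443: connect: connection refused"
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```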
	I1216 04:17:26.592968 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:26.603896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:26.603971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:26.628560 2088124 cri.go:89] found id: ""
	I1216 04:17:26.628583 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.628591 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:26.628597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:26.628663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:26.655525 2088124 cri.go:89] found id: ""
	I1216 04:17:26.655549 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.655558 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:26.655564 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:26.655627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:26.681142 2088124 cri.go:89] found id: ""
	I1216 04:17:26.681169 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.681178 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:26.681185 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:26.681245 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:26.726046 2088124 cri.go:89] found id: ""
	I1216 04:17:26.726069 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.726078 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:26.726084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:26.726145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:26.761483 2088124 cri.go:89] found id: ""
	I1216 04:17:26.761558 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.761570 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:26.761578 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:26.761670 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:26.804988 2088124 cri.go:89] found id: ""
	I1216 04:17:26.805062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.805085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:26.805104 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:26.805191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:26.835017 2088124 cri.go:89] found id: ""
	I1216 04:17:26.835107 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.835132 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:26.835146 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:26.835222 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:26.864963 2088124 cri.go:89] found id: ""
	I1216 04:17:26.864989 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.864998 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:26.865008 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:26.865020 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:26.920931 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:26.920966 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:26.936801 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:26.936828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:27.001379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:26.993556   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.993942   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.995579   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.996105   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.997606   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:26.993556   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.993942   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.995579   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.996105   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.997606   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:27.001453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:27.001473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:27.029301 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:27.029338 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.560341 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:29.570732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:29.570810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:29.594792 2088124 cri.go:89] found id: ""
	I1216 04:17:29.594819 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.594828 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:29.594835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:29.594900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:29.619488 2088124 cri.go:89] found id: ""
	I1216 04:17:29.619514 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.619523 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:29.619530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:29.619589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:29.644688 2088124 cri.go:89] found id: ""
	I1216 04:17:29.644711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.644720 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:29.644726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:29.644792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:29.670117 2088124 cri.go:89] found id: ""
	I1216 04:17:29.670143 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.670152 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:29.670158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:29.670246 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:29.744231 2088124 cri.go:89] found id: ""
	I1216 04:17:29.744258 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.744267 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:29.744273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:29.744333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:29.784178 2088124 cri.go:89] found id: ""
	I1216 04:17:29.784201 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.784211 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:29.784217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:29.784278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:29.813318 2088124 cri.go:89] found id: ""
	I1216 04:17:29.813341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.813349 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:29.813355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:29.813414 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:29.841947 2088124 cri.go:89] found id: ""
	I1216 04:17:29.841973 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.841981 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:29.841991 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:29.842003 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.872423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:29.872449 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:29.927890 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:29.927927 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:29.943872 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:29.943903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:30.030211 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:30.002270   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.003334   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.011701   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.013525   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.017907   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:30.002270   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.003334   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.011701   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.013525   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.017907   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:30.030233 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:30.030247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
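	(Editor's note: the timestamps show this probe-and-gather cycle repeating roughly every three seconds — 04:17:11, :14, :17, :20, :23, :26, :29, :32, :35 — until the test's overall wait gives up. The general shape of such a poll-until-deadline loop is sketched below; the probe mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*` command that opens each cycle, but the interval and deadline values are illustrative assumptions, not minikube's actual settings:)

```go
// Illustrative poll-until-deadline loop matching the cadence visible in the
// timestamps above. Not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the first command of each cycle in the log:
// pgrep exits non-zero when no matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed value
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process found")
			return
		}
		fmt.Println("apiserver not running; retrying in 3s")
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("gave up waiting for the apiserver")
}
```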
	I1216 04:17:32.571327 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:32.582193 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:32.582264 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:32.614548 2088124 cri.go:89] found id: ""
	I1216 04:17:32.614575 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.614584 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:32.614591 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:32.614656 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:32.639581 2088124 cri.go:89] found id: ""
	I1216 04:17:32.639609 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.639618 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:32.639624 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:32.639690 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:32.664409 2088124 cri.go:89] found id: ""
	I1216 04:17:32.664431 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.664440 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:32.664446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:32.664540 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:32.702042 2088124 cri.go:89] found id: ""
	I1216 04:17:32.702068 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.702077 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:32.702083 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:32.702143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:32.744945 2088124 cri.go:89] found id: ""
	I1216 04:17:32.744972 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.744981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:32.744988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:32.745073 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:32.789635 2088124 cri.go:89] found id: ""
	I1216 04:17:32.789662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.789671 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:32.789678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:32.789739 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:32.815679 2088124 cri.go:89] found id: ""
	I1216 04:17:32.815707 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.815717 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:32.815724 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:32.815787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:32.841170 2088124 cri.go:89] found id: ""
	I1216 04:17:32.841195 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.841204 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:32.841213 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:32.841224 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:32.897709 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:32.897747 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:32.913830 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:32.913862 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:32.978618 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:32.969623   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.970476   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972369   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972985   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.974627   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:32.969623   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.970476   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972369   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972985   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.974627   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:32.978642 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:32.978655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:33.004220 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:33.004272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:35.534506 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:35.545218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:35.545290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:35.570921 2088124 cri.go:89] found id: ""
	I1216 04:17:35.570949 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.570958 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:35.570965 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:35.571023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:35.596188 2088124 cri.go:89] found id: ""
	I1216 04:17:35.596216 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.596226 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:35.596232 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:35.596290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:35.621275 2088124 cri.go:89] found id: ""
	I1216 04:17:35.621298 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.621307 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:35.621313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:35.621373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:35.646280 2088124 cri.go:89] found id: ""
	I1216 04:17:35.646304 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.646312 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:35.646319 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:35.646380 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:35.674777 2088124 cri.go:89] found id: ""
	I1216 04:17:35.674850 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.674874 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:35.674894 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:35.674969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:35.734693 2088124 cri.go:89] found id: ""
	I1216 04:17:35.734716 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.734725 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:35.734732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:35.734792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:35.776099 2088124 cri.go:89] found id: ""
	I1216 04:17:35.776121 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.776129 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:35.776136 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:35.776195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:35.809643 2088124 cri.go:89] found id: ""
	I1216 04:17:35.809720 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.809744 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:35.809765 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:35.809805 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:35.865415 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:35.865452 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:35.880891 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:35.880969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:35.943467 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:35.935628   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.936425   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938104   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938398   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.939836   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:35.935628   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.936425   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938104   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938398   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.939836   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:35.943485 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:35.943497 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:35.968153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:35.968187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:38.502135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:38.512843 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:38.512915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:38.537512 2088124 cri.go:89] found id: ""
	I1216 04:17:38.537537 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.537546 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:38.537553 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:38.537618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:38.563124 2088124 cri.go:89] found id: ""
	I1216 04:17:38.563159 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.563168 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:38.563174 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:38.563265 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:38.589894 2088124 cri.go:89] found id: ""
	I1216 04:17:38.589918 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.589927 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:38.589933 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:38.590001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:38.615078 2088124 cri.go:89] found id: ""
	I1216 04:17:38.615104 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.615114 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:38.615120 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:38.615188 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:38.640365 2088124 cri.go:89] found id: ""
	I1216 04:17:38.640397 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.640406 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:38.640416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:38.640486 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:38.664018 2088124 cri.go:89] found id: ""
	I1216 04:17:38.664095 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.664116 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:38.664125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:38.664194 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:38.704314 2088124 cri.go:89] found id: ""
	I1216 04:17:38.704341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.704350 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:38.704356 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:38.704415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:38.747321 2088124 cri.go:89] found id: ""
	I1216 04:17:38.747349 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.747357 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:38.747366 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:38.747377 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:38.778906 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:38.778937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:38.846005 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:38.837440   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.837970   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.839654   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.840202   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.841808   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:38.837440   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.837970   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.839654   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.840202   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.841808   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:38.846026 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:38.846039 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:38.872344 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:38.872381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:38.907009 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:38.907060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:41.467452 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:41.478044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:41.478160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:41.505036 2088124 cri.go:89] found id: ""
	I1216 04:17:41.505062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.505072 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:41.505079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:41.505163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:41.533010 2088124 cri.go:89] found id: ""
	I1216 04:17:41.533044 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.533054 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:41.533078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:41.533160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:41.557094 2088124 cri.go:89] found id: ""
	I1216 04:17:41.557166 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.557181 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:41.557188 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:41.557261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:41.585674 2088124 cri.go:89] found id: ""
	I1216 04:17:41.585718 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.585727 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:41.585734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:41.585805 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:41.610276 2088124 cri.go:89] found id: ""
	I1216 04:17:41.610311 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.610320 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:41.610327 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:41.610398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:41.636914 2088124 cri.go:89] found id: ""
	I1216 04:17:41.636981 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.637010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:41.637025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:41.637097 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:41.665097 2088124 cri.go:89] found id: ""
	I1216 04:17:41.665161 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.665187 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:41.665202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:41.665279 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:41.727525 2088124 cri.go:89] found id: ""
	I1216 04:17:41.727553 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.727562 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:41.727571 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:41.727589 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:41.817873 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:41.817913 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:41.834790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:41.834817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:41.903430 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:41.895682   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.896090   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897605   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897915   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.899505   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:41.895682   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.896090   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897605   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897915   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.899505   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:41.903453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:41.903465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:41.928600 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:41.928640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:44.456049 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:44.466779 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:44.466853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:44.493084 2088124 cri.go:89] found id: ""
	I1216 04:17:44.493110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.493119 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:44.493126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:44.493185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:44.517683 2088124 cri.go:89] found id: ""
	I1216 04:17:44.517717 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.517727 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:44.517734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:44.517810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:44.541717 2088124 cri.go:89] found id: ""
	I1216 04:17:44.541749 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.541758 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:44.541764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:44.541830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:44.565684 2088124 cri.go:89] found id: ""
	I1216 04:17:44.565711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.565723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:44.565729 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:44.565796 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:44.590246 2088124 cri.go:89] found id: ""
	I1216 04:17:44.590285 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.590293 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:44.590300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:44.590372 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:44.618255 2088124 cri.go:89] found id: ""
	I1216 04:17:44.618284 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.618292 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:44.618299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:44.618367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:44.648191 2088124 cri.go:89] found id: ""
	I1216 04:17:44.648219 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.648228 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:44.648234 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:44.648295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:44.679499 2088124 cri.go:89] found id: ""
	I1216 04:17:44.679574 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.679598 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:44.679615 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:44.679640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:44.758228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:44.758267 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:44.779294 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:44.779331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:44.858723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:44.849709   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.850588   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852326   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852658   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.854144   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:44.849709   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.850588   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852326   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852658   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.854144   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:44.858749 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:44.858764 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:44.883969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:44.884008 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:47.413411 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:47.423987 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:47.424106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:47.449248 2088124 cri.go:89] found id: ""
	I1216 04:17:47.449314 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.449329 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:47.449336 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:47.449398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:47.475548 2088124 cri.go:89] found id: ""
	I1216 04:17:47.475578 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.475587 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:47.475593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:47.475655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:47.500110 2088124 cri.go:89] found id: ""
	I1216 04:17:47.500177 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.500199 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:47.500218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:47.500306 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:47.530628 2088124 cri.go:89] found id: ""
	I1216 04:17:47.530696 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.530723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:47.530741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:47.530826 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:47.556437 2088124 cri.go:89] found id: ""
	I1216 04:17:47.556464 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.556473 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:47.556479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:47.556549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:47.581048 2088124 cri.go:89] found id: ""
	I1216 04:17:47.581071 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.581081 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:47.581088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:47.581148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:47.606560 2088124 cri.go:89] found id: ""
	I1216 04:17:47.606588 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.606596 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:47.606603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:47.606663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:47.640327 2088124 cri.go:89] found id: ""
	I1216 04:17:47.640352 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.640360 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:47.640370 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:47.640388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:47.702815 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:47.702920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:47.736710 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:47.736751 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:47.839518 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:47.830501   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.831305   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833129   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833682   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.835432   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:47.830501   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.831305   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833129   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833682   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.835432   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:47.839540 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:47.839554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:47.865722 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:47.865758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:50.397056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:50.409097 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:50.409241 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:50.437681 2088124 cri.go:89] found id: ""
	I1216 04:17:50.437704 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.437714 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:50.437743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:50.437829 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:50.462756 2088124 cri.go:89] found id: ""
	I1216 04:17:50.462783 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.462791 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:50.462798 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:50.462914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:50.487724 2088124 cri.go:89] found id: ""
	I1216 04:17:50.487751 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.487760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:50.487767 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:50.487873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:50.513141 2088124 cri.go:89] found id: ""
	I1216 04:17:50.513208 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.513219 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:50.513237 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:50.513315 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:50.538993 2088124 cri.go:89] found id: ""
	I1216 04:17:50.539100 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.539124 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:50.539144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:50.539231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:50.564296 2088124 cri.go:89] found id: ""
	I1216 04:17:50.564319 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.564328 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:50.564335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:50.564395 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:50.587840 2088124 cri.go:89] found id: ""
	I1216 04:17:50.587865 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.587874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:50.587880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:50.587941 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:50.616481 2088124 cri.go:89] found id: ""
	I1216 04:17:50.616555 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.616577 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:50.616595 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:50.616611 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:50.674183 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:50.674218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:50.705566 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:50.705596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:50.817242 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:50.808677   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.809401   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811128   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811612   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.813344   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:50.808677   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.809401   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811128   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811612   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.813344   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:50.817265 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:50.817278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:50.842758 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:50.842792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.372576 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:53.383245 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:53.383313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:53.407745 2088124 cri.go:89] found id: ""
	I1216 04:17:53.407767 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.407775 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:53.407781 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:53.407839 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:53.435170 2088124 cri.go:89] found id: ""
	I1216 04:17:53.435194 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.435203 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:53.435209 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:53.435268 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:53.461399 2088124 cri.go:89] found id: ""
	I1216 04:17:53.461426 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.461437 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:53.461443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:53.461504 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:53.492254 2088124 cri.go:89] found id: ""
	I1216 04:17:53.492279 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.492289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:53.492295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:53.492356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:53.515778 2088124 cri.go:89] found id: ""
	I1216 04:17:53.515802 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.515810 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:53.515816 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:53.515875 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:53.539474 2088124 cri.go:89] found id: ""
	I1216 04:17:53.539498 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.539508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:53.539514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:53.539576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:53.565164 2088124 cri.go:89] found id: ""
	I1216 04:17:53.565229 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.565255 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:53.565273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:53.565359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:53.589875 2088124 cri.go:89] found id: ""
	I1216 04:17:53.589941 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.589963 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:53.589984 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:53.590026 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:53.654018 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:53.644813   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.645597   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647434   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647966   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.649437   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:53.644813   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.645597   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647434   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647966   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.649437   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:53.654042 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:53.654059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:53.679510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:53.679548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.719485 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:53.719514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:53.792435 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:53.792471 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:56.314262 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:56.325267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:56.325348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:56.350886 2088124 cri.go:89] found id: ""
	I1216 04:17:56.350908 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.350917 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:56.350923 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:56.350985 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:56.375203 2088124 cri.go:89] found id: ""
	I1216 04:17:56.375230 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.375239 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:56.375246 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:56.375305 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:56.400956 2088124 cri.go:89] found id: ""
	I1216 04:17:56.400980 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.400988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:56.400994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:56.401055 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:56.426054 2088124 cri.go:89] found id: ""
	I1216 04:17:56.426077 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.426086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:56.426093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:56.426154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:56.451881 2088124 cri.go:89] found id: ""
	I1216 04:17:56.451905 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.451914 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:56.451920 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:56.452029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:56.483163 2088124 cri.go:89] found id: ""
	I1216 04:17:56.483190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.483199 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:56.483223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:56.483297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:56.509283 2088124 cri.go:89] found id: ""
	I1216 04:17:56.509307 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.509316 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:56.509321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:56.509386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:56.533713 2088124 cri.go:89] found id: ""
	I1216 04:17:56.533788 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.533813 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:56.533851 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:56.533883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:56.591786 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:56.591822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:56.608010 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:56.608041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:56.677352 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:56.669278   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.669934   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.671527   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.672102   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.673230   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:56.669278   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.669934   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.671527   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.672102   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.673230   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:56.677375 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:56.677388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:56.710597 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:56.710632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:59.260233 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:59.274612 2088124 out.go:203] 
	W1216 04:17:59.277673 2088124 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 04:17:59.277728 2088124 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 04:17:59.277743 2088124 out.go:285] * Related issues:
	* Related issues:
	W1216 04:17:59.277759 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1216 04:17:59.277770 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1216 04:17:59.280576 2088124 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
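For anyone re-triaging a K8S_APISERVER_MISSING exit like the one above by hand, a minimal sketch (assuming the out/minikube-linux-arm64 binary and the newest-cni-450938 profile from this run) is to repeat the same probes the start path logs: the apiserver process check, the CRI container listing, and the containerd journal:

	# Probe for the apiserver process; a nonzero exit is the "apiserver process never appeared" condition
	out/minikube-linux-arm64 -p newest-cni-450938 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# List all CRI containers, mirroring the "container status" gathering step
	out/minikube-linux-arm64 -p newest-cni-450938 ssh -- sudo crictl ps -a
	# Tail the containerd unit journal, mirroring the "Gathering logs for containerd" step
	out/minikube-linux-arm64 -p newest-cni-450938 ssh -- sudo journalctl -u containerd -n 400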
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-450938
helpers_test.go:244: (dbg) docker inspect newest-cni-450938:

-- stdout --
	[
	    {
	        "Id": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	        "Created": "2025-12-16T04:01:45.321904496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2088249,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:11:49.715157618Z",
	            "FinishedAt": "2025-12-16T04:11:48.344695153Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hosts",
	        "LogPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65-json.log",
	        "Name": "/newest-cni-450938",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450938:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-450938",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	                "LowerDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450938",
	                "Source": "/var/lib/docker/volumes/newest-cni-450938/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450938",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450938",
	                "name.minikube.sigs.k8s.io": "newest-cni-450938",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d040f98e420d560a9e17e89d3d7fe4b27a499b96ccdebe83fcb72878ac3aa5a",
	            "SandboxKey": "/var/run/docker/netns/0d040f98e420",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34669"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34673"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34671"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34672"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450938": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:64:e7:5f:26:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "961937bd6f37532287f488797e74382e326ca0852d2ef3f8a1d23a546f1f7d1a",
	                    "EndpointID": "06c1897ed9171a5e6bbd198d06b0b6b16523d38b6c9e3e64ec0084e4fa9e4f3b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450938",
	                        "e2dde4cac2e0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
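The NetworkSettings.Ports map in the inspect output above is what minikube reads to locate its host-mapped ports; a minimal sketch of extracting the SSH mapping directly, using the same Go template the start log below runs via cli_runner, is:

	# Print the host port bound to the container's 22/tcp (34669 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-450938

The printed port matches the 127.0.0.1:34669 SSH endpoint dialed throughout the provisioning log.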
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (326.28518ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
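Exit status 2 with a Running host is consistent with minikube's documented status exit codes, which encode host, cluster, and Kubernetes health as separate bits (so 2 here points at the cluster rather than the host). A sketch of pulling the remaining fields the test did not ask for, using the documented --format flag and the field names from minikube's default status template:

	out/minikube-linux-arm64 status -p newest-cni-450938 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'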
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25: (1.543754309s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-092028 image list --format=json                                                                                                                                                                                                                │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ pause   │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ unpause │ -p embed-certs-092028 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	│ stop    │ -p no-preload-255023 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ addons  │ enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ start   │ -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	│ stop    │ -p newest-cni-450938 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-450938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:11:49
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:11:49.443609 2088124 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:11:49.443766 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.443791 2088124 out.go:374] Setting ErrFile to fd 2...
	I1216 04:11:49.443797 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.444086 2088124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:11:49.444552 2088124 out.go:368] Setting JSON to false
	I1216 04:11:49.445491 2088124 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35654,"bootTime":1765822656,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:11:49.445560 2088124 start.go:143] virtualization:  
	I1216 04:11:49.450767 2088124 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:11:49.453684 2088124 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:11:49.453830 2088124 notify.go:221] Checking for updates...
	I1216 04:11:49.459490 2088124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:11:49.462425 2088124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:49.465199 2088124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:11:49.468049 2088124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:11:49.470926 2088124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:11:49.474323 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:49.474898 2088124 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:11:49.507547 2088124 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:11:49.507675 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.559588 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.550344871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.559694 2088124 docker.go:319] overlay module found
	I1216 04:11:49.564661 2088124 out.go:179] * Using the docker driver based on existing profile
	I1216 04:11:49.567577 2088124 start.go:309] selected driver: docker
	I1216 04:11:49.567592 2088124 start.go:927] validating driver "docker" against &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.567688 2088124 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:11:49.568412 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.630893 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.62154899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.631269 2088124 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:11:49.631299 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:49.631354 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:11:49.631398 2088124 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.634471 2088124 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:11:49.637273 2088124 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:11:49.640282 2088124 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:11:49.643072 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:49.643109 2088124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:11:49.643124 2088124 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:11:49.643134 2088124 cache.go:65] Caching tarball of preloaded images
	I1216 04:11:49.643213 2088124 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:11:49.643223 2088124 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:11:49.643349 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:49.663232 2088124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:11:49.663256 2088124 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:11:49.663277 2088124 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:11:49.663307 2088124 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:11:49.663368 2088124 start.go:364] duration metric: took 37.825µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:11:49.663390 2088124 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:11:49.663398 2088124 fix.go:54] fixHost starting: 
	I1216 04:11:49.663657 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.680807 2088124 fix.go:112] recreateIfNeeded on newest-cni-450938: state=Stopped err=<nil>
	W1216 04:11:49.680842 2088124 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:11:49.684150 2088124 out.go:252] * Restarting existing docker container for "newest-cni-450938" ...
	I1216 04:11:49.684240 2088124 cli_runner.go:164] Run: docker start newest-cni-450938
	I1216 04:11:49.955342 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.981840 2088124 kic.go:430] container "newest-cni-450938" state is running.
	I1216 04:11:49.982211 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:50.021278 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:50.021527 2088124 machine.go:94] provisionDockerMachine start ...
	I1216 04:11:50.021596 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:50.049595 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:50.050060 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:50.050075 2088124 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:11:50.050748 2088124 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:11:53.188290 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.188358 2088124 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:11:53.188485 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.208640 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.208973 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.208992 2088124 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:11:53.354850 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.354932 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.373349 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.373653 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.373677 2088124 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:11:53.507317 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:11:53.507346 2088124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:11:53.507369 2088124 ubuntu.go:190] setting up certificates
	I1216 04:11:53.507379 2088124 provision.go:84] configureAuth start
	I1216 04:11:53.507463 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:53.525162 2088124 provision.go:143] copyHostCerts
	I1216 04:11:53.525241 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:11:53.525251 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:11:53.525327 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:11:53.525423 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:11:53.525428 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:11:53.525453 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:11:53.525509 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:11:53.525514 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:11:53.525536 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:11:53.525580 2088124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
	I1216 04:11:54.045695 2088124 provision.go:177] copyRemoteCerts
	I1216 04:11:54.045768 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:11:54.045810 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.066867 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.167270 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:11:54.185959 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:11:54.204990 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:11:54.223347 2088124 provision.go:87] duration metric: took 715.940901ms to configureAuth
	I1216 04:11:54.223373 2088124 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:11:54.223571 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:54.223579 2088124 machine.go:97] duration metric: took 4.202043696s to provisionDockerMachine
	I1216 04:11:54.223586 2088124 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:11:54.223597 2088124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:11:54.223657 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:11:54.223694 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.241386 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.339071 2088124 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:11:54.342372 2088124 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:11:54.342404 2088124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:11:54.342417 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:11:54.342476 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:11:54.342569 2088124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:11:54.342679 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:11:54.350184 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:54.367994 2088124 start.go:296] duration metric: took 144.392831ms for postStartSetup
	I1216 04:11:54.368092 2088124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:11:54.368136 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.385560 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.484799 2088124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:11:54.491513 2088124 fix.go:56] duration metric: took 4.828106411s for fixHost
	I1216 04:11:54.491541 2088124 start.go:83] releasing machines lock for "newest-cni-450938", held for 4.82816163s
	I1216 04:11:54.491612 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:54.509094 2088124 ssh_runner.go:195] Run: cat /version.json
	I1216 04:11:54.509138 2088124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:11:54.509150 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.509206 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.527383 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.529259 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.622646 2088124 ssh_runner.go:195] Run: systemctl --version
	I1216 04:11:54.714029 2088124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:11:54.718486 2088124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:11:54.718568 2088124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:11:54.726541 2088124 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:11:54.726568 2088124 start.go:496] detecting cgroup driver to use...
	I1216 04:11:54.726632 2088124 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:11:54.726714 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:11:54.745031 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:11:54.758297 2088124 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:11:54.758370 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:11:54.774348 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:11:54.787565 2088124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:11:54.906330 2088124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:11:55.031458 2088124 docker.go:234] disabling docker service ...
	I1216 04:11:55.031602 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:11:55.047495 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:11:55.061071 2088124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:11:55.176474 2088124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:11:55.308037 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:11:55.321108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:11:55.335545 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:11:55.344904 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:11:55.354341 2088124 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:11:55.354432 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:11:55.364241 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.373363 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:11:55.382311 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.391427 2088124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:11:55.399573 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:11:55.408617 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:11:55.417842 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:11:55.427155 2088124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:11:55.435028 2088124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:11:55.442465 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:55.555794 2088124 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 04:11:55.675355 2088124 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:11:55.675506 2088124 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:11:55.679491 2088124 start.go:564] Will wait 60s for crictl version
	I1216 04:11:55.679606 2088124 ssh_runner.go:195] Run: which crictl
	I1216 04:11:55.683263 2088124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:11:55.706762 2088124 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:11:55.706911 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.726295 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.754045 2088124 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:11:55.757209 2088124 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:55.773141 2088124 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:11:55.777028 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
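
The /etc/hosts edit above is idempotent: drop any existing line that ends in a tab followed by the host name, then append the fresh IP-to-name mapping. The same logic in Go, operating on a string copy of the hosts file rather than writing /etc/hosts directly (an assumption for safe experimentation):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any stale line for name and appends "ip\tname",
// matching the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.168.76.1", "host.minikube.internal"))
}
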
	I1216 04:11:55.790127 2088124 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:11:55.792976 2088124 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:11:55.793134 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:55.793224 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.820865 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.820893 2088124 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:11:55.820953 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.848708 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.848733 2088124 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:11:55.848741 2088124 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:11:55.848865 2088124 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:11:55.848944 2088124 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:11:55.877782 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:55.877809 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:11:55.877833 2088124 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:11:55.877856 2088124 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:11:55.877980 2088124 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
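
The generated kubeadm.yaml above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by --- separators. A sketch that splits such a file and reports each document's kind, using plain string handling and no kubeadm APIs:

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML string on "---" separator lines and
// returns the "kind:" of each document it finds.
func kinds(yaml string) []string {
	var out []string
	for _, doc := range strings.Split(yaml, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if k, ok := strings.CutPrefix(line, "kind: "); ok {
				out = append(out, k)
				break
			}
		}
	}
	return out
}

func main() {
	y := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(y)) // [InitConfiguration ClusterConfiguration]
}
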
	I1216 04:11:55.878053 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:11:55.886063 2088124 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:11:55.886135 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:11:55.893994 2088124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:11:55.906976 2088124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:11:55.921636 2088124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1216 04:11:55.935475 2088124 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:11:55.940181 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:55.958241 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.086097 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:56.102803 2088124 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:11:56.102828 2088124 certs.go:195] generating shared ca certs ...
	I1216 04:11:56.102856 2088124 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.103007 2088124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:11:56.103163 2088124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:11:56.103175 2088124 certs.go:257] generating profile certs ...
	I1216 04:11:56.103292 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:11:56.103376 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:11:56.103427 2088124 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:11:56.103545 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:11:56.103587 2088124 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:11:56.103600 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:11:56.103627 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:11:56.103658 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:11:56.103686 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:11:56.103735 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:56.104338 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:11:56.126254 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:11:56.147493 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:11:56.167667 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:11:56.186450 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:11:56.204453 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:11:56.222875 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:11:56.240385 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:11:56.257955 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:11:56.276171 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:11:56.293848 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:11:56.311719 2088124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:11:56.324807 2088124 ssh_runner.go:195] Run: openssl version
	I1216 04:11:56.331262 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.338764 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:11:56.346054 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.349987 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.350052 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.391179 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:11:56.398825 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.406218 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:11:56.413696 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417638 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417705 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.459490 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:11:56.466920 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.474252 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:11:56.481440 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485119 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485259 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.526344 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:11:56.533907 2088124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:11:56.537774 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:11:56.578487 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:11:56.619729 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:11:56.660999 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:11:56.702232 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:11:56.744306 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
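
Each openssl x509 -checkend 86400 run above asks whether the certificate will still be valid 24 hours (86400 seconds) from now; a non-zero exit means it expires within that window. An equivalent check with crypto/x509; the file name in main is a hypothetical local copy, not one of the paths above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate's NotAfter falls
// inside the next duration d, i.e. the condition -checkend tests for.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	b, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		fmt.Println(err)
		return
	}
	soon, err := expiresWithin(b, 24*time.Hour)
	fmt.Println(soon, err)
}
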
	I1216 04:11:56.785680 2088124 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:56.785803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:11:56.785870 2088124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:11:56.816785 2088124 cri.go:89] found id: ""
	I1216 04:11:56.816890 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:11:56.824683 2088124 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:11:56.824744 2088124 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:11:56.824813 2088124 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:11:56.832253 2088124 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:11:56.832838 2088124 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.833086 2088124 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-450938" cluster setting kubeconfig missing "newest-cni-450938" context setting]
	I1216 04:11:56.833830 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.835841 2088124 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:11:56.846568 2088124 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 04:11:56.846607 2088124 kubeadm.go:602] duration metric: took 21.839206ms to restartPrimaryControlPlane
	I1216 04:11:56.846659 2088124 kubeadm.go:403] duration metric: took 60.947212ms to StartCluster
	I1216 04:11:56.846683 2088124 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.846774 2088124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.847954 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.848288 2088124 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:11:56.848543 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:56.848590 2088124 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:11:56.848653 2088124 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-450938"
	I1216 04:11:56.848667 2088124 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-450938"
	I1216 04:11:56.848690 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.849140 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.849662 2088124 addons.go:70] Setting dashboard=true in profile "newest-cni-450938"
	I1216 04:11:56.849685 2088124 addons.go:239] Setting addon dashboard=true in "newest-cni-450938"
	W1216 04:11:56.849692 2088124 addons.go:248] addon dashboard should already be in state true
	I1216 04:11:56.849725 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.850155 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.851797 2088124 addons.go:70] Setting default-storageclass=true in profile "newest-cni-450938"
	I1216 04:11:56.851835 2088124 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450938"
	I1216 04:11:56.852230 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.854311 2088124 out.go:179] * Verifying Kubernetes components...
	I1216 04:11:56.857550 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.877736 2088124 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:11:56.883198 2088124 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:11:56.888994 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:11:56.889023 2088124 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:11:56.889099 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.905463 2088124 addons.go:239] Setting addon default-storageclass=true in "newest-cni-450938"
	I1216 04:11:56.905510 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.905917 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.906132 2088124 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:11:56.909026 2088124 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:56.909049 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:11:56.909124 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.939233 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.960779 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.969260 2088124 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:56.969285 2088124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:11:56.969344 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.994990 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:57.096083 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:57.153660 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:57.154691 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:11:57.154741 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:11:57.179948 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:57.181646 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:11:57.181698 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:11:57.220157 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:11:57.220192 2088124 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:11:57.270420 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:11:57.270450 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:11:57.289844 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:11:57.289925 2088124 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:11:57.304564 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:11:57.304589 2088124 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:11:57.318199 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:11:57.318268 2088124 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:11:57.331721 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:11:57.331747 2088124 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:11:57.344689 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:11:57.344766 2088124 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:11:57.358118 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
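
The dashboard apply above is assembled from the staged manifest paths, one -f flag per file, run through the version-pinned kubectl binary. A sketch of that argument assembly (binary path and file names copied from the log, list truncated; the actual exec and the sudo/KUBECONFIG environment are omitted):

package main

import (
	"fmt"
	"strings"
)

// applyArgs builds the argument vector for a multi-manifest kubectl apply,
// one -f flag per staged file, as in the log line above.
func applyArgs(kubectl string, manifests []string) []string {
	args := []string{kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return args
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml", // list truncated for brevity
	}
	fmt.Println(strings.Join(applyArgs("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", files), " "))
}
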
	W1216 04:11:57.937381 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937417 2088124 retry.go:31] will retry after 269.480362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:57.937480 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937486 2088124 retry.go:31] will retry after 229.28952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:57.937664 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937674 2088124 retry.go:31] will retry after 277.329171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937800 2088124 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:11:57.937903 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:58.167607 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.207320 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:58.215928 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.286306 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.286392 2088124 retry.go:31] will retry after 251.551644ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.336689 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.336775 2088124 retry.go:31] will retry after 297.618581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.344615 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.344703 2088124 retry.go:31] will retry after 371.748045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.438848 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:58.538550 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:58.607193 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.607227 2088124 retry.go:31] will retry after 295.364456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.635597 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:58.705620 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.705655 2088124 retry.go:31] will retry after 548.313742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.716963 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.791977 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.792012 2088124 retry.go:31] will retry after 352.878163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.903095 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.938720 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:11:58.980189 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.980231 2088124 retry.go:31] will retry after 538.903986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.145753 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:59.214092 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.214141 2088124 retry.go:31] will retry after 822.609154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.254394 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:59.315668 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.315705 2088124 retry.go:31] will retry after 808.232785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.439021 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:59.520292 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:59.580253 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.580290 2088124 retry.go:31] will retry after 1.339162464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.938854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.037859 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:12:00.126588 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:00.271287 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271330 2088124 retry.go:31] will retry after 1.560463337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:12:00.271395 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271405 2088124 retry.go:31] will retry after 965.630874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.439512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.919713 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:00.938198 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:01.016821 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.016853 2088124 retry.go:31] will retry after 2.723457612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.238128 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:01.299810 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.299846 2088124 retry.go:31] will retry after 1.407497229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.438022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:01.832831 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:01.895982 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.896019 2088124 retry.go:31] will retry after 1.861173275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.938295 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.438804 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.708270 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:02.778471 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:02.778510 2088124 retry.go:31] will retry after 3.48676176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:02.938901 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.740586 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:03.758141 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:03.823512 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.823549 2088124 retry.go:31] will retry after 3.513983603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:12:03.840241 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.840277 2088124 retry.go:31] will retry after 3.549700703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.938636 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.438975 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.438813 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
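
Interleaved with the retries, a separate loop probes roughly every 500ms for a running kube-apiserver process via `sudo pgrep -xnf kube-apiserver.*minikube.*`; the probes above keep coming back empty, which is why every apply continues to fail. A sketch of such a poll loop in Go follows; waitForAPIServer is hypothetical and runs pgrep locally for simplicity, whereas minikube executes it over SSH inside the node.

// Sketch of the ~500ms readiness poll interleaved with the applies above.
// Hypothetical code, for illustration only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a matching kube-apiserver process shows
// up or the timeout elapses.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches the pattern.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		<-ticker.C
	}
	return false
}

func main() {
	if waitForAPIServer(30 * time.Second) {
		fmt.Println("kube-apiserver is running")
	} else {
		fmt.Println("timed out waiting for kube-apiserver")
	}
}
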
	I1216 04:12:06.265883 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:06.326297 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.326330 2088124 retry.go:31] will retry after 5.907729831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.438566 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:06.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:07.337994 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:07.390520 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:07.400091 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.400119 2088124 retry.go:31] will retry after 4.07949146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.438412 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:07.458870 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.458913 2088124 retry.go:31] will retry after 5.738742007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.938058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.438048 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.938086 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.438088 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.938071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.438982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.938817 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:11.438560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
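Interleaved with the apply attempts, minikube probes for a live apiserver roughly every 500ms by running sudo pgrep -xnf kube-apiserver.*minikube.* inside the node; pgrep exits 0 only when a process whose full command line matches the pattern exists. A stripped-down local sketch of that poll loop, exec'ing pgrep directly rather than over ssh_runner and without sudo:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until a process matching pattern appears or
    // ctx expires, mirroring the half-second probe cadence in the log.
    func waitForProcess(ctx context.Context, pattern string) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		// pgrep -f matches the full command line; exit status 0 means
    		// at least one process matched.
    		if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
    		fmt.Println("apiserver never appeared:", err)
    		return
    	}
    	fmt.Println("apiserver process found")
    }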
	I1216 04:12:11.480608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:11.544274 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.544311 2088124 retry.go:31] will retry after 7.489839912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.938962 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.234793 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:12.294760 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.294795 2088124 retry.go:31] will retry after 8.284230916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.438042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.938369 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.198743 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:13.273972 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.274008 2088124 retry.go:31] will retry after 8.727161897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.438137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.938122 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.938105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.438117 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.938675 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.438275 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.438977 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.938090 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.438139 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.938875 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.034608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:19.095129 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.095161 2088124 retry.go:31] will retry after 13.285449955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.438765 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.938027 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.438947 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.579839 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:20.651187 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.651287 2088124 retry.go:31] will retry after 8.595963064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.438919 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.938886 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.001902 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:22.069854 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.069889 2088124 retry.go:31] will retry after 9.875475964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.438071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.938057 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.438759 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.938093 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.438012 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.438056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.938060 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.438683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.938545 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.438839 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.938076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.438528 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.938942 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.247522 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:29.319498 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.319530 2088124 retry.go:31] will retry after 11.610992075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.438808 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.938634 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.438853 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.938004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.438055 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.939022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.945765 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:32.028672 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.028710 2088124 retry.go:31] will retry after 8.660108846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.380884 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:32.438451 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:32.451845 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.451878 2088124 retry.go:31] will retry after 20.587741489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
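Note the interleaving at 04:12:32 above: the pgrep probe lands between the storage-provisioner apply being launched and its failure being logged. The health poller and the per-addon apply loops run concurrently, each addon (storage-provisioner, storageclass, dashboard) retrying on its own schedule, which is why the three retry streams weave through the probe lines. A minimal sketch of that shape; the timings are invented for illustration, not minikube's actual goroutine layout:

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    func main() {
    	var wg sync.WaitGroup

    	// One retry loop per addon, each with its own delay schedule.
    	for _, addon := range []string{"storage-provisioner", "storageclass", "dashboard"} {
    		wg.Add(1)
    		go func(name string, delay time.Duration) {
    			defer wg.Done()
    			for i := 0; i < 3; i++ {
    				fmt.Println("apply", name)
    				time.Sleep(delay) // stands in for the jittered backoff
    			}
    		}(addon, time.Duration(len(addon))*100*time.Millisecond)
    	}

    	// The health poller ticks on its own fixed cadence in parallel.
    	done := make(chan struct{})
    	go func() {
    		ticker := time.NewTicker(200 * time.Millisecond)
    		defer ticker.Stop()
    		for {
    			select {
    			case <-done:
    				return
    			case <-ticker.C:
    				fmt.Println("probe kube-apiserver")
    			}
    		}
    	}()

    	wg.Wait()
    	close(done)
    }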
	I1216 04:12:32.939020 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.438637 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.939026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.438183 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.938889 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.438058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.438040 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.938449 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.438932 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.938711 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.438609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.938102 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.438039 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.938131 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.689879 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:40.758598 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.758633 2088124 retry.go:31] will retry after 22.619838961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.931114 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:12:40.938807 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:41.022703 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:41.022737 2088124 retry.go:31] will retry after 26.329717671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:41.438070 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:41.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.438073 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.938708 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.438842 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.938877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.438603 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.938026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.438387 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.938042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.438913 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.938061 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.438105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.938608 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.438052 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.938137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.438126 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.938158 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.438047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.437993 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.938585 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.040311 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:53.100279 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:53.100315 2088124 retry.go:31] will retry after 25.050501438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:53.438735 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.938047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.438981 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.938826 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.438076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.938982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:56.438082 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:56.938775 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:56.938878 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:56.965240 2088124 cri.go:89] found id: ""
	I1216 04:12:56.965267 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.965275 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:56.965282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:56.965342 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:56.994326 2088124 cri.go:89] found id: ""
	I1216 04:12:56.994352 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.994361 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:56.994368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:56.994428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:57.023992 2088124 cri.go:89] found id: ""
	I1216 04:12:57.024019 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.024028 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:57.024034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:57.024096 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:57.048533 2088124 cri.go:89] found id: ""
	I1216 04:12:57.048557 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.048564 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:57.048571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:57.048633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:57.073452 2088124 cri.go:89] found id: ""
	I1216 04:12:57.073477 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.073489 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:57.073495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:57.073556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:12:57.098320 2088124 cri.go:89] found id: ""
	I1216 04:12:57.098343 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.098351 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:12:57.098358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:12:57.098422 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:12:57.122156 2088124 cri.go:89] found id: ""
	I1216 04:12:57.122178 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.122186 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:12:57.122192 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:12:57.122253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:12:57.146348 2088124 cri.go:89] found id: ""
	I1216 04:12:57.146371 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.146379 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:12:57.146389 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:12:57.146400 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:12:57.204504 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:12:57.204554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:12:57.222444 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:12:57.222477 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:12:57.295723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:12:57.295745 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:12:57.295758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:12:57.320926 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:12:57.320959 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:12:59.851668 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:59.862238 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:59.862307 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:59.886311 2088124 cri.go:89] found id: ""
	I1216 04:12:59.886338 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.886346 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:59.886353 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:59.886412 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:59.910403 2088124 cri.go:89] found id: ""
	I1216 04:12:59.910426 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.910434 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:59.910440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:59.910498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:59.935230 2088124 cri.go:89] found id: ""
	I1216 04:12:59.935253 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.935262 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:59.935268 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:59.935329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:59.958999 2088124 cri.go:89] found id: ""
	I1216 04:12:59.959022 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.959030 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:59.959037 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:59.959113 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:59.984633 2088124 cri.go:89] found id: ""
	I1216 04:12:59.984655 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.984663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:59.984670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:59.984729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:00.052821 2088124 cri.go:89] found id: ""
	I1216 04:13:00.052848 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.052857 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:00.052865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:00.052942 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:00.179259 2088124 cri.go:89] found id: ""
	I1216 04:13:00.179286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.179295 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:00.179301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:00.179374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:00.301818 2088124 cri.go:89] found id: ""
	I1216 04:13:00.301845 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.301854 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:00.301865 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:00.301877 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:00.370430 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:00.370474 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:00.387961 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:00.387994 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:00.469934 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:00.470008 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:00.470035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:00.497033 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:00.497108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.031116 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:03.042155 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:03.042231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:03.067263 2088124 cri.go:89] found id: ""
	I1216 04:13:03.067286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.067294 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:03.067300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:03.067359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:03.092385 2088124 cri.go:89] found id: ""
	I1216 04:13:03.092411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.092421 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:03.092434 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:03.092500 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:03.121839 2088124 cri.go:89] found id: ""
	I1216 04:13:03.121866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.121874 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:03.121881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:03.121939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:03.145563 2088124 cri.go:89] found id: ""
	I1216 04:13:03.145591 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.145600 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:03.145606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:03.145674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:03.173280 2088124 cri.go:89] found id: ""
	I1216 04:13:03.173308 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.173317 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:03.173324 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:03.173387 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:03.198437 2088124 cri.go:89] found id: ""
	I1216 04:13:03.198464 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.198472 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:03.198479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:03.198539 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:03.223390 2088124 cri.go:89] found id: ""
	I1216 04:13:03.223417 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.223426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:03.223433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:03.223492 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:03.247999 2088124 cri.go:89] found id: ""
	I1216 04:13:03.248027 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.248037 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:03.248046 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:03.248058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:03.273012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:03.273045 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.309023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:03.309054 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:03.365917 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:03.365958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:03.379538 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:13:03.383127 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:03.383196 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:03.513399 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:03.513433 2088124 retry.go:31] will retry after 36.39416212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:03.513601 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.013933 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:06.025509 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:06.025592 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:06.052215 2088124 cri.go:89] found id: ""
	I1216 04:13:06.052240 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.052251 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:06.052258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:06.052322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:06.079261 2088124 cri.go:89] found id: ""
	I1216 04:13:06.079294 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.079303 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:06.079309 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:06.079373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:06.105297 2088124 cri.go:89] found id: ""
	I1216 04:13:06.105320 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.105329 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:06.105335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:06.105394 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:06.134648 2088124 cri.go:89] found id: ""
	I1216 04:13:06.134671 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.134679 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:06.134685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:06.134753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:06.159604 2088124 cri.go:89] found id: ""
	I1216 04:13:06.159627 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.159635 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:06.159641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:06.159705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:06.189283 2088124 cri.go:89] found id: ""
	I1216 04:13:06.189307 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.189315 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:06.189322 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:06.189431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:06.214435 2088124 cri.go:89] found id: ""
	I1216 04:13:06.214469 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.214479 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:06.214486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:06.214553 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:06.240374 2088124 cri.go:89] found id: ""
	I1216 04:13:06.240399 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.240407 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:06.240417 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:06.240465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:06.297779 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:06.297828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:06.314788 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:06.314817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:06.383844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.383863 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:06.383876 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:06.409175 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:06.409211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:07.353255 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:07.417109 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:07.417143 2088124 retry.go:31] will retry after 43.71748827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:08.979175 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:08.990018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:08.990104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:09.017028 2088124 cri.go:89] found id: ""
	I1216 04:13:09.017051 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.017060 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:09.017066 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:09.017126 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:09.042381 2088124 cri.go:89] found id: ""
	I1216 04:13:09.042404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.042413 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:09.042419 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:09.042477 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:09.071646 2088124 cri.go:89] found id: ""
	I1216 04:13:09.071670 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.071679 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:09.071685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:09.071744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:09.100697 2088124 cri.go:89] found id: ""
	I1216 04:13:09.100722 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.100730 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:09.100737 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:09.100797 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:09.129662 2088124 cri.go:89] found id: ""
	I1216 04:13:09.129695 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.129704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:09.129710 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:09.129780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:09.156770 2088124 cri.go:89] found id: ""
	I1216 04:13:09.156794 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.156802 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:09.156809 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:09.156869 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:09.182436 2088124 cri.go:89] found id: ""
	I1216 04:13:09.182458 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.182466 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:09.182472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:09.182531 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:09.206146 2088124 cri.go:89] found id: ""
	I1216 04:13:09.206170 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.206177 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:09.206186 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:09.206198 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:09.231510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:09.231544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:09.260226 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:09.260256 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:09.316036 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:09.316074 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:09.332123 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:09.332153 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:09.399253 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:09.390164    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.390761    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392299    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392750    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.394603    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:09.390164    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.390761    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392299    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392750    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.394603    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:11.899540 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:11.910018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:11.910090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:11.938505 2088124 cri.go:89] found id: ""
	I1216 04:13:11.938532 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.938541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:11.938549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:11.938611 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:11.962625 2088124 cri.go:89] found id: ""
	I1216 04:13:11.962654 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.962663 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:11.962681 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:11.962753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:11.987471 2088124 cri.go:89] found id: ""
	I1216 04:13:11.987497 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.987506 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:11.987512 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:11.987578 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:12.016864 2088124 cri.go:89] found id: ""
	I1216 04:13:12.016892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.016900 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:12.016907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:12.016971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:12.042061 2088124 cri.go:89] found id: ""
	I1216 04:13:12.042088 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.042096 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:12.042102 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:12.042163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:12.071427 2088124 cri.go:89] found id: ""
	I1216 04:13:12.071455 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.071464 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:12.071471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:12.071533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:12.096407 2088124 cri.go:89] found id: ""
	I1216 04:13:12.096454 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.096463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:12.096470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:12.096529 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:12.120925 2088124 cri.go:89] found id: ""
	I1216 04:13:12.120952 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.120961 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:12.120970 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:12.120981 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:12.187317 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:12.178799    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.179379    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181098    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181645    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.183376    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:12.178799    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.179379    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181098    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181645    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.183376    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:12.187390 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:12.187411 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:12.212126 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:12.212162 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:12.243105 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:12.243134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:12.300571 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:12.300619 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:14.817445 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:14.827746 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:14.827821 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:14.857336 2088124 cri.go:89] found id: ""
	I1216 04:13:14.857363 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.857372 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:14.857379 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:14.857446 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:14.882109 2088124 cri.go:89] found id: ""
	I1216 04:13:14.882137 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.882146 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:14.882152 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:14.882211 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:14.914132 2088124 cri.go:89] found id: ""
	I1216 04:13:14.914161 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.914171 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:14.914178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:14.914239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:14.939185 2088124 cri.go:89] found id: ""
	I1216 04:13:14.939214 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.939223 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:14.939230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:14.939297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:14.963568 2088124 cri.go:89] found id: ""
	I1216 04:13:14.963595 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.963604 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:14.963630 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:14.963702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:14.988853 2088124 cri.go:89] found id: ""
	I1216 04:13:14.988880 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.988889 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:14.988895 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:14.988957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:15.018658 2088124 cri.go:89] found id: ""
	I1216 04:13:15.018685 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.018694 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:15.018701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:15.018780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:15.052902 2088124 cri.go:89] found id: ""
	I1216 04:13:15.052926 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.052935 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:15.052945 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:15.052956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:15.110239 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:15.110275 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:15.126429 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:15.126498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:15.193844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:15.184934    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.185663    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.187427    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.188101    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.189782    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:15.193874 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:15.193889 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:15.219891 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:15.219925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
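The block above is one complete pass of the wait loop: minikube pgreps for a kube-apiserver process, asks crictl for each expected control-plane container by name, and re-collects kubelet, dmesg, describe-nodes, containerd, and container-status output, then tries again on a roughly three-second cadence (visible in the timestamps). Because every query returns an empty ID list, the loop cannot make progress. A rough Go sketch of that polling shape (the cadence and overall deadline here are assumptions; the crictl invocation is the one from the log):

```go
// poll.go - a sketch of the wait loop, not minikube's actual implementation.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs runs the same crictl query seen in the log for one component.
func containerIDs(ctx context.Context, name string) ([]string, error) {
	out, err := exec.CommandContext(ctx,
		"sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		for _, c := range components {
			if ids, err := containerIDs(ctx, c); err == nil && len(ids) > 0 {
				fmt.Printf("found %d container(s) for %q\n", len(ids), c)
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for control-plane containers")
			return
		case <-time.After(3 * time.Second): // matches the cadence in the log
		}
	}
}
```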
	I1216 04:13:17.752258 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:17.763106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:17.763180 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:17.789059 2088124 cri.go:89] found id: ""
	I1216 04:13:17.789084 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.789093 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:17.789099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:17.789158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:17.817534 2088124 cri.go:89] found id: ""
	I1216 04:13:17.817560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.817569 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:17.817576 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:17.817637 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:17.843134 2088124 cri.go:89] found id: ""
	I1216 04:13:17.843160 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.843169 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:17.843175 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:17.843240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:17.868379 2088124 cri.go:89] found id: ""
	I1216 04:13:17.868404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.868414 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:17.868421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:17.868490 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:17.893356 2088124 cri.go:89] found id: ""
	I1216 04:13:17.893384 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.893393 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:17.893400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:17.893463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:17.921808 2088124 cri.go:89] found id: ""
	I1216 04:13:17.921851 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.921860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:17.921867 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:17.921928 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:17.947257 2088124 cri.go:89] found id: ""
	I1216 04:13:17.947284 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.947293 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:17.947300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:17.947367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:17.975318 2088124 cri.go:89] found id: ""
	I1216 04:13:17.975345 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.975354 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:17.975364 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:17.975375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:18.051655 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:18.042648    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.043445    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045243    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045767    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.047547    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:18.051680 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:18.051693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:18.078685 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:18.078723 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:18.107761 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:18.107792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:18.151402 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:13:18.166502 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:18.166585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 04:13:18.219917 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:18.220071 2088124 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
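The storage-provisioner failure is the same root cause one layer up: kubectl's client-side validation tries to download the OpenAPI schema from https://localhost:8443 and is refused. The suggested --validate=false would not rescue it, since the apply itself also needs a reachable API server; minikube instead logs "apply failed, will retry" and re-queues the addon. A hedged sketch of that retry-with-backoff shape (the retry count and backoff are illustrative, not minikube's actual policy; paths are the ones from the log):

```go
// retryapply.go - re-run `kubectl apply` with backoff until it succeeds or we give up.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func applyManifest(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
	// Keep the parent environment so kubectl still resolves its dependencies.
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	const (
		kubeconfig = "/var/lib/minikube/kubeconfig"                    // from the log
		manifest   = "/etc/kubernetes/addons/storage-provisioner.yaml" // from the log
	)
	backoff := 2 * time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		err := applyManifest(kubeconfig, manifest)
		if err == nil {
			fmt.Println("addon applied")
			return
		}
		fmt.Printf("attempt %d failed, will retry: %v\n", attempt, err)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between retries
	}
	fmt.Println("giving up: API server never became reachable")
}
```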
	I1216 04:13:20.720560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:20.734518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:20.734605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:20.771344 2088124 cri.go:89] found id: ""
	I1216 04:13:20.771418 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.771435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:20.771442 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:20.771517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:20.801470 2088124 cri.go:89] found id: ""
	I1216 04:13:20.801496 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.801505 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:20.801511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:20.801591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:20.826547 2088124 cri.go:89] found id: ""
	I1216 04:13:20.826620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.826644 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:20.826663 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:20.826747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:20.852855 2088124 cri.go:89] found id: ""
	I1216 04:13:20.852881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.852891 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:20.852898 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:20.852986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:20.878623 2088124 cri.go:89] found id: ""
	I1216 04:13:20.878659 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.878668 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:20.878692 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:20.878808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:20.902864 2088124 cri.go:89] found id: ""
	I1216 04:13:20.902938 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.902964 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:20.902984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:20.903181 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:20.932453 2088124 cri.go:89] found id: ""
	I1216 04:13:20.932480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.932488 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:20.932495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:20.932552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:20.961972 2088124 cri.go:89] found id: ""
	I1216 04:13:20.962003 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.962012 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:20.962021 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:20.962046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:21.031620 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:21.021920    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.023404    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.024377    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.025233    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.026911    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:21.031656 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:21.031669 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:21.057107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:21.057141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:21.084165 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:21.084195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:21.144652 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:21.144688 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:23.662474 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:23.672891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:23.672972 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:23.728294 2088124 cri.go:89] found id: ""
	I1216 04:13:23.728317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.728325 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:23.728332 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:23.728390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:23.774385 2088124 cri.go:89] found id: ""
	I1216 04:13:23.774414 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.774423 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:23.774429 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:23.774496 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:23.804506 2088124 cri.go:89] found id: ""
	I1216 04:13:23.804531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.804553 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:23.804560 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:23.804618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:23.831638 2088124 cri.go:89] found id: ""
	I1216 04:13:23.831674 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.831683 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:23.831689 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:23.831766 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:23.856129 2088124 cri.go:89] found id: ""
	I1216 04:13:23.856155 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.856164 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:23.856172 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:23.856251 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:23.884761 2088124 cri.go:89] found id: ""
	I1216 04:13:23.884787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.884796 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:23.884803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:23.884905 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:23.913711 2088124 cri.go:89] found id: ""
	I1216 04:13:23.913736 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.913745 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:23.913752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:23.913810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:23.938590 2088124 cri.go:89] found id: ""
	I1216 04:13:23.938616 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.938625 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:23.938635 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:23.938646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:23.993972 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:23.994007 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:24.012474 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:24.012506 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:24.080748 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:24.071242    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.071643    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.073210    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.074640    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.075561    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:24.080778 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:24.080791 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:24.110317 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:24.110357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:26.644643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:26.655360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:26.655430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:26.679082 2088124 cri.go:89] found id: ""
	I1216 04:13:26.679108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.679117 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:26.679124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:26.679184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:26.727361 2088124 cri.go:89] found id: ""
	I1216 04:13:26.727389 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.727399 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:26.727405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:26.727466 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:26.784659 2088124 cri.go:89] found id: ""
	I1216 04:13:26.784688 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.784697 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:26.784703 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:26.784765 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:26.813210 2088124 cri.go:89] found id: ""
	I1216 04:13:26.813237 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.813246 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:26.813253 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:26.813336 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:26.837930 2088124 cri.go:89] found id: ""
	I1216 04:13:26.837955 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.837963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:26.837970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:26.838031 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:26.864344 2088124 cri.go:89] found id: ""
	I1216 04:13:26.864369 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.864378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:26.864385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:26.864461 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:26.889169 2088124 cri.go:89] found id: ""
	I1216 04:13:26.889195 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.889207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:26.889214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:26.889298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:26.913569 2088124 cri.go:89] found id: ""
	I1216 04:13:26.913596 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.913604 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:26.913614 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:26.913644 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:26.929642 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:26.929671 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:26.992130 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:26.983971    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.984717    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986365    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986828    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.988289    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:26.992154 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:26.992166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:27.018253 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:27.018291 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:27.047464 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:27.047492 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.603162 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:29.613926 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:29.614005 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:29.639664 2088124 cri.go:89] found id: ""
	I1216 04:13:29.639690 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.639700 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:29.639706 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:29.639773 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:29.664287 2088124 cri.go:89] found id: ""
	I1216 04:13:29.664313 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.664322 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:29.664328 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:29.664391 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:29.715854 2088124 cri.go:89] found id: ""
	I1216 04:13:29.715881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.715890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:29.715896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:29.715957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:29.775256 2088124 cri.go:89] found id: ""
	I1216 04:13:29.775283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.775291 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:29.775298 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:29.775359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:29.800860 2088124 cri.go:89] found id: ""
	I1216 04:13:29.800884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.800893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:29.800899 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:29.800966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:29.826179 2088124 cri.go:89] found id: ""
	I1216 04:13:29.826201 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.826209 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:29.826216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:29.826287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:29.851587 2088124 cri.go:89] found id: ""
	I1216 04:13:29.851657 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.851668 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:29.851675 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:29.851771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:29.876290 2088124 cri.go:89] found id: ""
	I1216 04:13:29.876317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.876327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:29.876336 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:29.876351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.934758 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:29.934795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:29.950904 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:29.950934 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:30.063379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:30.052801    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.053836    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.055700    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.056432    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.058410    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:30.063402 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:30.063416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:30.093513 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:30.093550 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:32.623683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:32.634450 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:32.634522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:32.659386 2088124 cri.go:89] found id: ""
	I1216 04:13:32.659411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.659419 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:32.659426 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:32.659488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:32.700370 2088124 cri.go:89] found id: ""
	I1216 04:13:32.700397 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.700406 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:32.700413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:32.700483 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:32.757584 2088124 cri.go:89] found id: ""
	I1216 04:13:32.757606 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.757615 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:32.757621 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:32.757683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:32.803420 2088124 cri.go:89] found id: ""
	I1216 04:13:32.803445 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.803454 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:32.803460 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:32.803523 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:32.828842 2088124 cri.go:89] found id: ""
	I1216 04:13:32.828866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.828875 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:32.828881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:32.828949 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:32.853353 2088124 cri.go:89] found id: ""
	I1216 04:13:32.853380 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.853389 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:32.853398 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:32.853501 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:32.877408 2088124 cri.go:89] found id: ""
	I1216 04:13:32.877435 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.877444 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:32.877451 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:32.877510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:32.901743 2088124 cri.go:89] found id: ""
	I1216 04:13:32.901770 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.901780 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:32.901790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:32.901804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:32.967369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:32.958798    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.959484    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.960984    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.961467    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.962939    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:32.967394 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:32.967408 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:32.992952 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:32.992987 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:33.022501 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:33.022532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:33.078417 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:33.078454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:35.594569 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:35.607352 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:35.607423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:35.637370 2088124 cri.go:89] found id: ""
	I1216 04:13:35.637394 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.637403 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:35.637409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:35.637468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:35.661404 2088124 cri.go:89] found id: ""
	I1216 04:13:35.661428 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.661437 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:35.661443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:35.661499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:35.700087 2088124 cri.go:89] found id: ""
	I1216 04:13:35.700110 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.700118 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:35.700124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:35.700185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:35.753090 2088124 cri.go:89] found id: ""
	I1216 04:13:35.753163 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.753187 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:35.753207 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:35.753322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:35.783667 2088124 cri.go:89] found id: ""
	I1216 04:13:35.783693 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.783701 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:35.783707 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:35.783783 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:35.808401 2088124 cri.go:89] found id: ""
	I1216 04:13:35.808426 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.808434 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:35.808457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:35.808518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:35.832934 2088124 cri.go:89] found id: ""
	I1216 04:13:35.833001 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.833014 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:35.833022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:35.833080 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:35.857857 2088124 cri.go:89] found id: ""
	I1216 04:13:35.857892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.857902 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
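The sweep above is minikube's control-plane health probe: after pgrep finds no kube-apiserver process, each expected component is queried by name through crictl ps -a --quiet --name=..., and every query returns an empty ID list, i.e. no control-plane containers exist at all. A minimal Go sketch of the same sweep, assuming crictl runs on the local PATH (minikube actually executes these over SSH via ssh_runner.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names the log sweeps over.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs, one per line; -a includes
			// stopped containers, so an empty result really means "never ran".
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			if ids := strings.Fields(string(out)); len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
			} else {
				fmt.Printf("%q: %d container(s)\n", name, len(ids))
			}
		}
	}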
	I1216 04:13:35.857911 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:35.857928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:35.888212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:35.888240 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:35.944155 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:35.944191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:35.960968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:35.960997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:36.037726 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:36.037753 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:36.037768 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
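Each log-gathering pass collects the kubelet and containerd journals, filtered dmesg output, container status, and a kubectl describe nodes that consistently fails with connect: connection refused on localhost:8443. A refused dial means the TCP stack answered but nothing is listening on the apiserver port, which matches the empty container sweep; a firewall drop would instead surface as a timeout. A minimal probe illustrating that distinction, with the address and timeout as illustrative assumptions:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the apiserver port the failing kubectl calls point at.
		// An immediate "connection refused" error means the host is up
		// but no process is bound to 8443; a timeout would suggest the
		// packets are being dropped on the way instead.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}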
	I1216 04:13:38.565516 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:38.576078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:38.576153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:38.603519 2088124 cri.go:89] found id: ""
	I1216 04:13:38.603550 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.603564 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:38.603571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:38.603642 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:38.630185 2088124 cri.go:89] found id: ""
	I1216 04:13:38.630212 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.630222 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:38.630228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:38.630295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:38.656496 2088124 cri.go:89] found id: ""
	I1216 04:13:38.656518 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.656527 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:38.656532 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:38.656597 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:38.691354 2088124 cri.go:89] found id: ""
	I1216 04:13:38.691375 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.691384 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:38.691390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:38.691448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:38.727377 2088124 cri.go:89] found id: ""
	I1216 04:13:38.727451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.727476 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:38.727495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:38.727607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:38.792847 2088124 cri.go:89] found id: ""
	I1216 04:13:38.792924 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.792949 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:38.792969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:38.793082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:38.819253 2088124 cri.go:89] found id: ""
	I1216 04:13:38.819326 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.819351 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:38.819369 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:38.819479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:38.844536 2088124 cri.go:89] found id: ""
	I1216 04:13:38.844560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.844569 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:38.844578 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:38.844590 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:38.903226 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:38.903264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:38.919524 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:38.919556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:38.983586 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:38.983611 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:38.983625 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:39.009510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:39.009548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:39.908601 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:13:39.971867 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:39.972017 2088124 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
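The dashboard addon fails before anything reaches the cluster: kubectl apply validates each manifest against the server's OpenAPI schema, and with no apiserver listening the schema download itself is refused, so every file is rejected client-side. minikube logs "apply failed, will retry" and re-runs the apply. A rough sketch of such a retry loop, assuming a local kubectl and an arbitrary attempt count and backoff (not minikube's actual addons.go logic):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` until it succeeds or the
	// attempts run out, mirroring the "apply failed, will retry" lines above.
	func applyWithRetry(manifest string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
			if err == nil {
				return nil
			}
			fmt.Printf("apply of %s failed (attempt %d/%d): %v\n",
				manifest, i+1, attempts, err)
			time.Sleep(2 * time.Second) // fixed backoff; an assumption
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/dashboard-ns.yaml", 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}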
	I1216 04:13:41.538728 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:41.550610 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:41.550686 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:41.580358 2088124 cri.go:89] found id: ""
	I1216 04:13:41.580388 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.580398 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:41.580405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:41.580476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:41.609251 2088124 cri.go:89] found id: ""
	I1216 04:13:41.609323 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.609346 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:41.609360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:41.609437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:41.634677 2088124 cri.go:89] found id: ""
	I1216 04:13:41.634714 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.634724 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:41.634731 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:41.634811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:41.660492 2088124 cri.go:89] found id: ""
	I1216 04:13:41.660531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.660541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:41.660555 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:41.660624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:41.706922 2088124 cri.go:89] found id: ""
	I1216 04:13:41.706958 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.706967 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:41.706974 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:41.707062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:41.771121 2088124 cri.go:89] found id: ""
	I1216 04:13:41.771150 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.771160 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:41.771167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:41.771228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:41.798371 2088124 cri.go:89] found id: ""
	I1216 04:13:41.798409 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.798418 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:41.798424 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:41.798505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:41.825080 2088124 cri.go:89] found id: ""
	I1216 04:13:41.825108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.825118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:41.825128 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:41.825142 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:41.881228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:41.881264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:41.897224 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:41.897252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:41.962985 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:41.963011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:41.963024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:41.988969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:41.989006 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:44.532418 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:44.542803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:44.542915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:44.568416 2088124 cri.go:89] found id: ""
	I1216 04:13:44.568439 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.568457 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:44.568463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:44.568522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:44.594143 2088124 cri.go:89] found id: ""
	I1216 04:13:44.594169 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.594179 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:44.594186 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:44.594247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:44.618788 2088124 cri.go:89] found id: ""
	I1216 04:13:44.618819 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.618828 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:44.618835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:44.618895 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:44.644302 2088124 cri.go:89] found id: ""
	I1216 04:13:44.644325 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.644333 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:44.644340 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:44.644398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:44.669819 2088124 cri.go:89] found id: ""
	I1216 04:13:44.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.669849 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:44.669855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:44.669924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:44.725552 2088124 cri.go:89] found id: ""
	I1216 04:13:44.725575 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.725583 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:44.725589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:44.725650 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:44.765386 2088124 cri.go:89] found id: ""
	I1216 04:13:44.765408 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.765426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:44.765432 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:44.765491 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:44.793682 2088124 cri.go:89] found id: ""
	I1216 04:13:44.793763 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.793788 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:44.793827 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:44.793857 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:44.852432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:44.852473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:44.868492 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:44.868520 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:44.931865 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:44.931889 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:44.931903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:44.957522 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:44.957557 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.485499 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:47.496279 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:47.496356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:47.520654 2088124 cri.go:89] found id: ""
	I1216 04:13:47.520681 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.520690 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:47.520696 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:47.520761 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:47.551944 2088124 cri.go:89] found id: ""
	I1216 04:13:47.551978 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.551987 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:47.552001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:47.552065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:47.578411 2088124 cri.go:89] found id: ""
	I1216 04:13:47.578438 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.578450 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:47.578457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:47.578519 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:47.604018 2088124 cri.go:89] found id: ""
	I1216 04:13:47.604041 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.604049 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:47.604055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:47.604112 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:47.629467 2088124 cri.go:89] found id: ""
	I1216 04:13:47.629491 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.629499 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:47.629506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:47.629567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:47.658252 2088124 cri.go:89] found id: ""
	I1216 04:13:47.658280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.658289 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:47.658295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:47.658362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:47.683444 2088124 cri.go:89] found id: ""
	I1216 04:13:47.683472 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.683481 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:47.683487 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:47.683548 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:47.745597 2088124 cri.go:89] found id: ""
	I1216 04:13:47.745620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.745629 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:47.745638 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:47.745650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.788108 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:47.788134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:47.844259 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:47.844292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:47.860046 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:47.860078 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:47.931100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:47.931125 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:47.931139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.458157 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:50.468844 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:50.468915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:50.493698 2088124 cri.go:89] found id: ""
	I1216 04:13:50.493725 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.493735 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:50.493741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:50.493799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:50.518623 2088124 cri.go:89] found id: ""
	I1216 04:13:50.518652 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.518664 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:50.518671 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:50.518737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:50.543940 2088124 cri.go:89] found id: ""
	I1216 04:13:50.543969 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.543978 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:50.543984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:50.544043 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:50.570246 2088124 cri.go:89] found id: ""
	I1216 04:13:50.570283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.570292 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:50.570299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:50.570374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:50.596855 2088124 cri.go:89] found id: ""
	I1216 04:13:50.596884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.596893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:50.596900 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:50.596965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:50.622325 2088124 cri.go:89] found id: ""
	I1216 04:13:50.622352 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.622361 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:50.622368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:50.622428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:50.647658 2088124 cri.go:89] found id: ""
	I1216 04:13:50.647683 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.647691 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:50.647698 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:50.647760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:50.672119 2088124 cri.go:89] found id: ""
	I1216 04:13:50.672156 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.672166 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:50.672176 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:50.672187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:50.741830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:50.741871 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:50.758886 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:50.758917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:50.843759 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:50.834975    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.835406    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837202    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837673    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.839159    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:50.834975    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.835406    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837202    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837673    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.839159    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:50.843782 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:50.843795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.870242 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:50.870278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:51.134849 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:51.199925 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:51.200071 2088124 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:13:51.205122 2088124 out.go:179] * Enabled addons: 
	I1216 04:13:51.208001 2088124 addons.go:530] duration metric: took 1m54.35940748s for enable addons: enabled=[]
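With both addon applies refused, minikube gives up and reports an empty enabled-addons list; the "duration metric" line is the wall-clock time spent on the whole addon-enable phase. A sketch of that timing pattern, with a placeholder standing in for the real work:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		start := time.Now()
		time.Sleep(50 * time.Millisecond) // stand-in for enabling addons
		// Matches the shape of the log line above: elapsed time plus the
		// (here empty) list of addons that actually came up.
		fmt.Printf("duration metric: took %v for enable addons: enabled=%v\n",
			time.Since(start), []string{})
	}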
	I1216 04:13:53.399835 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:53.410221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:53.410292 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:53.442995 2088124 cri.go:89] found id: ""
	I1216 04:13:53.443019 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.443028 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:53.443034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:53.443119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:53.469085 2088124 cri.go:89] found id: ""
	I1216 04:13:53.469108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.469116 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:53.469122 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:53.469185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:53.492673 2088124 cri.go:89] found id: ""
	I1216 04:13:53.492741 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.492764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:53.492778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:53.492851 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:53.519461 2088124 cri.go:89] found id: ""
	I1216 04:13:53.519484 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.519493 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:53.519499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:53.519559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:53.544555 2088124 cri.go:89] found id: ""
	I1216 04:13:53.544578 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.544587 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:53.544593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:53.544655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:53.570476 2088124 cri.go:89] found id: ""
	I1216 04:13:53.570499 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.570508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:53.570514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:53.570576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:53.598792 2088124 cri.go:89] found id: ""
	I1216 04:13:53.598814 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.598822 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:53.598828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:53.598894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:53.627454 2088124 cri.go:89] found id: ""
	I1216 04:13:53.627477 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.627485 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:53.627494 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:53.627505 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:53.684461 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:53.684541 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:53.709962 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:53.710041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:53.803419 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:53.793865    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.794942    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796672    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796978    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.798420    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:53.803444 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:53.803462 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:53.829615 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:53.829652 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
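	The per-component checks above can be reproduced by hand inside the node (for example via minikube ssh). Below is a minimal bash sketch of the same crictl queries, with the component list taken directly from the log; everything else is illustrative:

	    #!/usr/bin/env bash
	    # Re-run the CRI container checks the log records, one component at a time.
	    set -euo pipefail

	    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
	                kube-controller-manager kindnet kubernetes-dashboard)

	    for name in "${components[@]}"; do
	      # Same query minikube issues: all containers (any state) whose name matches.
	      ids=$(sudo crictl ps -a --quiet --name="$name" || true)
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name -> $ids"
	      fi
	    done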
	I1216 04:13:56.358195 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:56.368722 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:56.368794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:56.393334 2088124 cri.go:89] found id: ""
	I1216 04:13:56.393358 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.393367 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:56.393373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:56.393440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:56.417912 2088124 cri.go:89] found id: ""
	I1216 04:13:56.417935 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.417944 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:56.417983 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:56.418062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:56.445420 2088124 cri.go:89] found id: ""
	I1216 04:13:56.445451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.445461 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:56.445467 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:56.445526 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:56.469454 2088124 cri.go:89] found id: ""
	I1216 04:13:56.469478 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.469487 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:56.469493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:56.469552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:56.494121 2088124 cri.go:89] found id: ""
	I1216 04:13:56.494145 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.494153 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:56.494165 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:56.494225 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:56.517578 2088124 cri.go:89] found id: ""
	I1216 04:13:56.517602 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.517611 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:56.517637 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:56.517700 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:56.544866 2088124 cri.go:89] found id: ""
	I1216 04:13:56.544891 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.544899 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:56.544941 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:56.545022 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:56.573759 2088124 cri.go:89] found id: ""
	I1216 04:13:56.573787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.573796 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:56.573805 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:56.573817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:56.599163 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:56.599202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:56.630921 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:56.630948 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:56.688477 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:56.688553 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:56.720603 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:56.720634 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:56.828200 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:56.818507    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.819366    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821131    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821683    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.823608    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
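	Every describe-nodes attempt fails the same way because nothing is listening on the apiserver port. A quick probe of that endpoint, assuming shell access to the node and curl in the node image (the URL and port come from the log; the /livez path and 5-second timeout are illustrative):

	    # Probe the endpoint the failing kubectl calls dial (https://localhost:8443).
	    # Run inside the node, e.g. via `minikube ssh -p <profile>`.
	    curl -sk --max-time 5 https://localhost:8443/livez \
	      || echo "apiserver unreachable: connection refused, matching the log"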
	I1216 04:13:59.328466 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:59.339589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:59.339664 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:59.364346 2088124 cri.go:89] found id: ""
	I1216 04:13:59.364373 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.364382 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:59.364389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:59.364494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:59.393412 2088124 cri.go:89] found id: ""
	I1216 04:13:59.393480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.393503 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:59.393516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:59.393590 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:59.422012 2088124 cri.go:89] found id: ""
	I1216 04:13:59.422039 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.422048 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:59.422055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:59.422111 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:59.447252 2088124 cri.go:89] found id: ""
	I1216 04:13:59.447280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.447289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:59.447301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:59.447362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:59.473224 2088124 cri.go:89] found id: ""
	I1216 04:13:59.473253 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.473262 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:59.473269 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:59.473333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:59.498117 2088124 cri.go:89] found id: ""
	I1216 04:13:59.498142 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.498151 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:59.498157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:59.498218 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:59.531960 2088124 cri.go:89] found id: ""
	I1216 04:13:59.531983 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.531992 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:59.531998 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:59.532064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:59.555530 2088124 cri.go:89] found id: ""
	I1216 04:13:59.555557 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.555567 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:59.555586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:59.555597 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:59.587567 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:59.587594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:59.642770 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:59.642808 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:59.658670 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:59.658698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:59.758071 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:59.747797    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.748997    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.749964    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.751637    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.752201    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:59.758096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:59.758109 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
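	Each "Gathering logs for ..." step maps to a single command on the node. Collected here for convenience, copied from the Run: lines above (only the backticks are rewritten as $(...)):

	    sudo journalctl -u kubelet -n 400        # kubelet logs
	    sudo journalctl -u containerd -n 400     # containerd logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # dmesg
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status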
	I1216 04:14:02.297267 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:02.308025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:02.308094 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:02.332912 2088124 cri.go:89] found id: ""
	I1216 04:14:02.332938 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.332947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:02.332953 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:02.333015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:02.358723 2088124 cri.go:89] found id: ""
	I1216 04:14:02.358746 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.358754 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:02.358760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:02.358820 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:02.384845 2088124 cri.go:89] found id: ""
	I1216 04:14:02.384869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.384878 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:02.384884 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:02.384947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:02.411300 2088124 cri.go:89] found id: ""
	I1216 04:14:02.411327 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.411337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:02.411343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:02.411401 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:02.436448 2088124 cri.go:89] found id: ""
	I1216 04:14:02.436490 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.436500 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:02.436506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:02.436568 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:02.462003 2088124 cri.go:89] found id: ""
	I1216 04:14:02.462030 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.462039 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:02.462045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:02.462115 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:02.487374 2088124 cri.go:89] found id: ""
	I1216 04:14:02.487398 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.487407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:02.487414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:02.487473 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:02.513515 2088124 cri.go:89] found id: ""
	I1216 04:14:02.513541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.513549 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:02.513559 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:02.513574 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:02.569398 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:02.569439 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:02.585943 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:02.585986 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:02.652956 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:02.644316    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.645186    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.646890    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.647275    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.648908    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:02.653021 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:02.653040 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:02.678261 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:02.678296 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
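	The timestamps show one full diagnostic pass roughly every three seconds: minikube is polling for a kube-apiserver process and re-gathering logs each time the probe fails. An illustrative bash equivalent of that wait loop (the real loop lives in minikube's Go code; the 5-minute budget here is an assumed value):

	    # Poll for a kube-apiserver process the way the log's pgrep probe does.
	    deadline=$(( $(date +%s) + 300 ))   # assumed 5-minute budget
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3   # matches the ~3 s cadence visible in the timestamps
	    done
	    echo "kube-apiserver process found"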
	I1216 04:14:05.269784 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:05.280500 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:05.280584 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:05.305398 2088124 cri.go:89] found id: ""
	I1216 04:14:05.305424 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.305432 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:05.305439 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:05.305498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:05.331233 2088124 cri.go:89] found id: ""
	I1216 04:14:05.331256 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.331264 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:05.331270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:05.331329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:05.356501 2088124 cri.go:89] found id: ""
	I1216 04:14:05.356527 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.356537 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:05.356543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:05.356605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:05.383678 2088124 cri.go:89] found id: ""
	I1216 04:14:05.383706 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.383714 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:05.383720 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:05.383819 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:05.408800 2088124 cri.go:89] found id: ""
	I1216 04:14:05.408826 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.408835 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:05.408842 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:05.408900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:05.437636 2088124 cri.go:89] found id: ""
	I1216 04:14:05.437664 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.437673 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:05.437680 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:05.437738 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:05.463588 2088124 cri.go:89] found id: ""
	I1216 04:14:05.463619 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.463628 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:05.463635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:05.463707 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:05.492371 2088124 cri.go:89] found id: ""
	I1216 04:14:05.492399 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.492409 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:05.492418 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:05.492430 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:05.548250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:05.548287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:05.564063 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:05.564088 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:05.632904 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:05.624351    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.625146    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.626904    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.627499    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.629025    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:05.632926 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:05.632939 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:05.659343 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:05.659376 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
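	The connection-refused errors all target localhost:8443 because that is the server recorded in the on-node kubeconfig the describe-nodes command passes in. One way to confirm, reusing the kubectl binary and kubeconfig path from the log (the jsonpath expression is illustrative):

	    # Print the apiserver URL the on-node kubeconfig points at.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -o jsonpath='{.clusters[0].cluster.server}'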
	I1216 04:14:08.201168 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:08.211739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:08.211822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:08.236069 2088124 cri.go:89] found id: ""
	I1216 04:14:08.236097 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.236106 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:08.236118 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:08.236177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:08.261051 2088124 cri.go:89] found id: ""
	I1216 04:14:08.261075 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.261083 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:08.261089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:08.261150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:08.285569 2088124 cri.go:89] found id: ""
	I1216 04:14:08.285592 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.285600 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:08.285606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:08.285667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:08.311218 2088124 cri.go:89] found id: ""
	I1216 04:14:08.311258 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.311266 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:08.311273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:08.311366 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:08.345673 2088124 cri.go:89] found id: ""
	I1216 04:14:08.345697 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.345706 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:08.345713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:08.345776 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:08.370418 2088124 cri.go:89] found id: ""
	I1216 04:14:08.370441 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.370449 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:08.370456 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:08.370513 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:08.395107 2088124 cri.go:89] found id: ""
	I1216 04:14:08.395170 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.395196 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:08.395215 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:08.395299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:08.419032 2088124 cri.go:89] found id: ""
	I1216 04:14:08.419085 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.419094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:08.419104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:08.419115 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:08.475411 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:08.475448 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:08.491357 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:08.491391 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:08.557388 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:08.548702    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.549457    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551263    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551890    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.553470    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:08.557412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:08.557426 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:08.582743 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:08.582777 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:11.111145 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:11.123009 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:11.123095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:11.150909 2088124 cri.go:89] found id: ""
	I1216 04:14:11.150934 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.150942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:11.150949 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:11.151075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:11.182574 2088124 cri.go:89] found id: ""
	I1216 04:14:11.182600 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.182610 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:11.182616 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:11.182719 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:11.208283 2088124 cri.go:89] found id: ""
	I1216 04:14:11.208310 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.208319 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:11.208325 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:11.208417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:11.237024 2088124 cri.go:89] found id: ""
	I1216 04:14:11.237052 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.237061 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:11.237069 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:11.237132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:11.265167 2088124 cri.go:89] found id: ""
	I1216 04:14:11.265189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.265197 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:11.265203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:11.265261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:11.290122 2088124 cri.go:89] found id: ""
	I1216 04:14:11.290144 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.290152 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:11.290159 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:11.290217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:11.317188 2088124 cri.go:89] found id: ""
	I1216 04:14:11.317211 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.317219 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:11.317225 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:11.317304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:11.342140 2088124 cri.go:89] found id: ""
	I1216 04:14:11.342164 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.342173 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:11.342206 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:11.342225 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:11.368021 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:11.368058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:11.397287 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:11.397318 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:11.453124 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:11.453158 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:11.468881 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:11.468910 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:11.535360 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:11.526322    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.526895    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.528582    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.529258    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.530769    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
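	Since crictl keeps returning empty lists, containerd itself can be queried to cross-check that the k8s.io namespace really holds no containers. A sketch, assuming ctr is available on the node (the namespace is implied by the runc root /run/containerd/runc/k8s.io in the log; the ctr invocations are illustrative):

	    # Cross-check containerd directly, bypassing the CRI layer.
	    sudo ctr --namespace k8s.io containers list
	    sudo ctr --namespace k8s.io tasks list    # running tasks, if any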
	I1216 04:14:14.036278 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:14.046954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:14.047104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:14.072898 2088124 cri.go:89] found id: ""
	I1216 04:14:14.072923 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.072932 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:14.072938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:14.072998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:14.098005 2088124 cri.go:89] found id: ""
	I1216 04:14:14.098041 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.098049 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:14.098056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:14.098123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:14.125919 2088124 cri.go:89] found id: ""
	I1216 04:14:14.125945 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.125954 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:14.125961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:14.126068 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:14.151392 2088124 cri.go:89] found id: ""
	I1216 04:14:14.151416 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.151424 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:14.151430 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:14.151494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:14.181023 2088124 cri.go:89] found id: ""
	I1216 04:14:14.181054 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.181064 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:14.181070 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:14.181139 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:14.206141 2088124 cri.go:89] found id: ""
	I1216 04:14:14.206166 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.206175 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:14.206181 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:14.206250 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:14.230051 2088124 cri.go:89] found id: ""
	I1216 04:14:14.230084 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.230093 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:14.230098 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:14.230183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:14.255362 2088124 cri.go:89] found id: ""
	I1216 04:14:14.255388 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.255412 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:14.255423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:14.255434 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:14.310536 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:14.310573 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:14.326390 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:14.326478 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:14.389470 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:14.380411    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.381325    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383077    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383571    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.385261    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:14.389493 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:14.389512 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:14.415767 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:14.415804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:16.946959 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:16.978797 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:16.978873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:17.023928 2088124 cri.go:89] found id: ""
	I1216 04:14:17.024005 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.024022 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:17.024030 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:17.024092 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:17.049994 2088124 cri.go:89] found id: ""
	I1216 04:14:17.050024 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.050033 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:17.050040 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:17.050122 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:17.075095 2088124 cri.go:89] found id: ""
	I1216 04:14:17.075120 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.075128 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:17.075134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:17.075195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:17.103161 2088124 cri.go:89] found id: ""
	I1216 04:14:17.103189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.103209 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:17.103216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:17.103687 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:17.139217 2088124 cri.go:89] found id: ""
	I1216 04:14:17.139246 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.139255 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:17.139261 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:17.139325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:17.170063 2088124 cri.go:89] found id: ""
	I1216 04:14:17.170091 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.170102 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:17.170108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:17.170186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:17.195843 2088124 cri.go:89] found id: ""
	I1216 04:14:17.195869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.195879 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:17.195885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:17.195966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:17.221935 2088124 cri.go:89] found id: ""
	I1216 04:14:17.221962 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.221971 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:17.222001 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:17.222019 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:17.278612 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:17.278650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:17.295004 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:17.295076 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:17.359742 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:17.351174    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.351977    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.353668    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.354101    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.355803    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
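	Each of these "describe nodes" attempts fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and since every crictl lookup above returns an empty ID list (no kube-apiserver container exists), the TCP dial to [::1]:8443 is refused. A minimal probe to confirm that state from inside the node, assuming the default port 8443 shown in this log and the standard apiserver /livez endpoint, could look like:

	    # Run inside the minikube node (e.g. via `minikube ssh`).
	    # Check whether anything listens on the apiserver port, then probe its
	    # health endpoint; both fail while the control plane is down.
	    sudo ss -tlnp | grep ':8443' || echo 'nothing listening on 8443'
	    curl -ksf https://localhost:8443/livez && echo 'apiserver up' || echo 'apiserver unreachable'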
	I1216 04:14:17.359766 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:17.359779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:17.385281 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:17.385316 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
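	The block above is one iteration of minikube's apiserver wait loop: pgrep -xnf kube-apiserver.*minikube.* looks for a running apiserver process (-x exact match, -n newest, -f match against the full command line), each expected control-plane component is then enumerated through crictl, and the kubelet/dmesg/containerd/container-status logs are gathered when nothing turns up. The per-component checks can be reproduced by hand with a loop like this sketch (component names taken from the log above):

	    # Sketch of the per-component lookups logged above; run inside the node.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done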
	I1216 04:14:19.913504 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:19.924126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:19.924223 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:19.981102 2088124 cri.go:89] found id: ""
	I1216 04:14:19.981182 2088124 logs.go:282] 0 containers: []
	W1216 04:14:19.981204 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:19.981223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:19.981319 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:20.025801 2088124 cri.go:89] found id: ""
	I1216 04:14:20.025875 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.025897 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:20.025918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:20.026010 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:20.057062 2088124 cri.go:89] found id: ""
	I1216 04:14:20.057088 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.057097 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:20.057103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:20.057168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:20.082749 2088124 cri.go:89] found id: ""
	I1216 04:14:20.082774 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.082783 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:20.082790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:20.082854 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:20.109626 2088124 cri.go:89] found id: ""
	I1216 04:14:20.109653 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.109663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:20.109670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:20.109731 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:20.134934 2088124 cri.go:89] found id: ""
	I1216 04:14:20.134957 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.134980 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:20.134988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:20.135088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:20.161170 2088124 cri.go:89] found id: ""
	I1216 04:14:20.161197 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.161206 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:20.161213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:20.161299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:20.187553 2088124 cri.go:89] found id: ""
	I1216 04:14:20.187578 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.187587 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:20.187597 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:20.187629 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:20.255987 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:20.246960    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.247520    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.249493    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.250032    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.251482    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:20.256011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:20.256024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:20.281257 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:20.281331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:20.310693 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:20.310724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:20.367395 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:20.367436 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:22.883831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:22.894924 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:22.894999 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:22.920332 2088124 cri.go:89] found id: ""
	I1216 04:14:22.920359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.920379 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:22.920386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:22.920445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:22.977215 2088124 cri.go:89] found id: ""
	I1216 04:14:22.977243 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.977252 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:22.977258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:22.977317 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:23.028698 2088124 cri.go:89] found id: ""
	I1216 04:14:23.028723 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.028732 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:23.028739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:23.028804 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:23.055098 2088124 cri.go:89] found id: ""
	I1216 04:14:23.055124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.055133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:23.055140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:23.055209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:23.080450 2088124 cri.go:89] found id: ""
	I1216 04:14:23.080483 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.080493 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:23.080499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:23.080559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:23.105251 2088124 cri.go:89] found id: ""
	I1216 04:14:23.105275 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.105284 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:23.105296 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:23.105355 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:23.130544 2088124 cri.go:89] found id: ""
	I1216 04:14:23.130573 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.130588 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:23.130594 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:23.130653 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:23.155787 2088124 cri.go:89] found id: ""
	I1216 04:14:23.155863 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.155879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:23.155889 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:23.155901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:23.184285 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:23.184315 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:23.240021 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:23.240058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:23.255934 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:23.255969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:23.324390 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:23.316382    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.316885    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.318697    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.319197    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.320361    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:23.324415 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:23.324432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:25.850349 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:25.861084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:25.861157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:25.885912 2088124 cri.go:89] found id: ""
	I1216 04:14:25.885939 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.885947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:25.885954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:25.886015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:25.914385 2088124 cri.go:89] found id: ""
	I1216 04:14:25.914408 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.914416 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:25.914422 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:25.914482 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:25.957379 2088124 cri.go:89] found id: ""
	I1216 04:14:25.957406 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.957415 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:25.957421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:25.957480 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:26.020008 2088124 cri.go:89] found id: ""
	I1216 04:14:26.020036 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.020045 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:26.020051 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:26.020118 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:26.047424 2088124 cri.go:89] found id: ""
	I1216 04:14:26.047452 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.047461 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:26.047468 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:26.047534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:26.073161 2088124 cri.go:89] found id: ""
	I1216 04:14:26.073187 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.073208 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:26.073216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:26.073277 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:26.103238 2088124 cri.go:89] found id: ""
	I1216 04:14:26.103260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.103268 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:26.103274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:26.103337 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:26.128964 2088124 cri.go:89] found id: ""
	I1216 04:14:26.128993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.129004 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:26.129013 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:26.129025 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:26.185309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:26.185350 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:26.201116 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:26.201191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:26.261346 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:26.253589    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.254367    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255430    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255945    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.257599    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:26.261367 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:26.261379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:26.286659 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:26.286693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:28.816260 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:28.826799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:28.826873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:28.851396 2088124 cri.go:89] found id: ""
	I1216 04:14:28.851425 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.851435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:28.851441 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:28.851503 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:28.875518 2088124 cri.go:89] found id: ""
	I1216 04:14:28.875541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.875550 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:28.875556 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:28.875614 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:28.904430 2088124 cri.go:89] found id: ""
	I1216 04:14:28.904454 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.904462 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:28.904476 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:28.904537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:28.929129 2088124 cri.go:89] found id: ""
	I1216 04:14:28.929153 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.929162 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:28.929169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:28.929228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:28.966014 2088124 cri.go:89] found id: ""
	I1216 04:14:28.966042 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.966051 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:28.966057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:28.966123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:29.025945 2088124 cri.go:89] found id: ""
	I1216 04:14:29.025972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.025988 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:29.025995 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:29.026064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:29.051899 2088124 cri.go:89] found id: ""
	I1216 04:14:29.051935 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.051946 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:29.051952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:29.052023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:29.080317 2088124 cri.go:89] found id: ""
	I1216 04:14:29.080341 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.080351 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:29.080361 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:29.080373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:29.135930 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:29.135967 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:29.154187 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:29.154216 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:29.221073 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:29.212783    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.213403    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.214843    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.215170    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.216622    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:29.221096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:29.221111 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:29.246641 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:29.246676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:31.779202 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:31.790954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:31.791029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:31.817812 2088124 cri.go:89] found id: ""
	I1216 04:14:31.817897 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.817925 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:31.817946 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:31.818067 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:31.842726 2088124 cri.go:89] found id: ""
	I1216 04:14:31.842753 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.842762 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:31.842769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:31.842832 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:31.868497 2088124 cri.go:89] found id: ""
	I1216 04:14:31.868523 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.868532 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:31.868538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:31.868602 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:31.898624 2088124 cri.go:89] found id: ""
	I1216 04:14:31.898646 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.898655 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:31.898662 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:31.898720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:31.924967 2088124 cri.go:89] found id: ""
	I1216 04:14:31.924993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.925003 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:31.925011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:31.925074 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:31.966946 2088124 cri.go:89] found id: ""
	I1216 04:14:31.966972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.966981 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:31.966988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:31.967075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:31.999136 2088124 cri.go:89] found id: ""
	I1216 04:14:31.999162 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.999170 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:31.999177 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:31.999248 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:32.037224 2088124 cri.go:89] found id: ""
	I1216 04:14:32.037260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:32.037269 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:32.037280 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:32.037292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:32.098221 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:32.098257 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:32.114315 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:32.114346 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:32.179522 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:32.170571    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.171240    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.172913    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.173537    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.175422    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:32.179546 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:32.179598 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:32.205901 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:32.205937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:34.736487 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:34.747033 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:34.747125 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:34.774783 2088124 cri.go:89] found id: ""
	I1216 04:14:34.774808 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.774817 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:34.774826 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:34.774892 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:34.804248 2088124 cri.go:89] found id: ""
	I1216 04:14:34.804272 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.804281 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:34.804294 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:34.804356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:34.829461 2088124 cri.go:89] found id: ""
	I1216 04:14:34.829485 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.829493 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:34.829499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:34.829560 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:34.857116 2088124 cri.go:89] found id: ""
	I1216 04:14:34.857141 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.857151 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:34.857157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:34.857219 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:34.882336 2088124 cri.go:89] found id: ""
	I1216 04:14:34.882359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.882367 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:34.882373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:34.882434 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:34.907931 2088124 cri.go:89] found id: ""
	I1216 04:14:34.907954 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.907962 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:34.907969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:34.908027 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:34.956047 2088124 cri.go:89] found id: ""
	I1216 04:14:34.956069 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.956077 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:34.956084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:34.956145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:35.024159 2088124 cri.go:89] found id: ""
	I1216 04:14:35.024183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:35.024197 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:35.024207 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:35.024218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:35.052560 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:35.052632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:35.120169 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:35.109992    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.110996    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.112972    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.113360    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.115151    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:35.120193 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:35.120206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:35.148539 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:35.148572 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:35.177137 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:35.177163 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:37.736828 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:37.748034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:37.748119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:37.774072 2088124 cri.go:89] found id: ""
	I1216 04:14:37.774096 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.774105 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:37.774113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:37.774174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:37.798854 2088124 cri.go:89] found id: ""
	I1216 04:14:37.798879 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.798887 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:37.798893 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:37.798953 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:37.824863 2088124 cri.go:89] found id: ""
	I1216 04:14:37.824889 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.824898 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:37.824905 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:37.824995 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:37.849318 2088124 cri.go:89] found id: ""
	I1216 04:14:37.849340 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.849348 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:37.849354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:37.849418 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:37.874246 2088124 cri.go:89] found id: ""
	I1216 04:14:37.874269 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.874277 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:37.874285 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:37.874343 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:37.900978 2088124 cri.go:89] found id: ""
	I1216 04:14:37.901002 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.901010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:37.901016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:37.901076 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:37.929331 2088124 cri.go:89] found id: ""
	I1216 04:14:37.929360 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.929370 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:37.929376 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:37.929440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:37.969527 2088124 cri.go:89] found id: ""
	I1216 04:14:37.969556 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.969564 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:37.969573 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:37.969585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:38.009528 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:38.009566 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:38.055850 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:38.055880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:38.113260 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:38.113301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:38.129810 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:38.129846 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:38.195392 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:38.187231    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.187971    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.189569    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.190023    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.191590    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:38.187231    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.187971    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.189569    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.190023    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.191590    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
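The repeating pattern above is minikube's readiness probe: every few seconds it checks for a kube-apiserver process over SSH and, finding none, lists CRI containers for each control-plane component before regenerating the log bundle. A minimal Go sketch of that loop follows; the component list and the command strings are copied from the log, while the ~3 second cadence is read off the timestamps above and everything else (function names, error handling) is an assumption for illustration, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // Component names copied from the crictl listings above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    // apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // probe that opens each cycle; pgrep exits non-zero when nothing matches.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	for !apiserverRunning() {
    		for _, name := range components {
    			// "sudo crictl ps -a --quiet --name=<component>"; empty output
    			// is what the log reports as `found id: ""` / "0 containers".
    			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    			fmt.Printf("%-24s %d container(s)\n", name, len(strings.Fields(string(out))))
    		}
    		time.Sleep(3 * time.Second) // cadence inferred from the timestamps above
    	}
    }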
	I1216 04:14:40.695695 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:40.706489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:40.706566 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:40.733370 2088124 cri.go:89] found id: ""
	I1216 04:14:40.733400 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.733409 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:40.733416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:40.733476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:40.760997 2088124 cri.go:89] found id: ""
	I1216 04:14:40.761027 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.761037 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:40.761043 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:40.761106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:40.785757 2088124 cri.go:89] found id: ""
	I1216 04:14:40.785785 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.785793 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:40.785799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:40.785859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:40.810917 2088124 cri.go:89] found id: ""
	I1216 04:14:40.810946 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.810954 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:40.810961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:40.811021 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:40.837261 2088124 cri.go:89] found id: ""
	I1216 04:14:40.837289 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.837298 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:40.837306 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:40.837367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:40.865095 2088124 cri.go:89] found id: ""
	I1216 04:14:40.865124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.865133 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:40.865139 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:40.865197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:40.893132 2088124 cri.go:89] found id: ""
	I1216 04:14:40.893156 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.893164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:40.893170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:40.893230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:40.917368 2088124 cri.go:89] found id: ""
	I1216 04:14:40.917390 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.917398 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:40.917407 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:40.917418 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:40.988706 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:40.988789 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:41.026114 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:41.026141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:41.097192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:41.088410    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.089258    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.090961    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.091663    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.093296    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:41.088410    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.089258    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.090961    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.091663    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.093296    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:41.097218 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:41.097232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:41.122894 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:41.122929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
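Every "describe nodes" gather in this run fails the same way: kubectl inside the node dials the apiserver at localhost:8443 and gets connection refused, because no kube-apiserver container or process exists yet. The failure can be reproduced without kubectl at all; a minimal sketch, with the host and port taken from the stderr above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint kubectl is dialing in the stderr above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// With no apiserver listening, this prints the familiar
    		// "connect: connection refused".
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8443")
    }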
	I1216 04:14:43.655609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:43.666076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:43.666148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:43.697517 2088124 cri.go:89] found id: ""
	I1216 04:14:43.697542 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.697550 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:43.697557 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:43.697617 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:43.722700 2088124 cri.go:89] found id: ""
	I1216 04:14:43.722727 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.722737 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:43.722743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:43.722811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:43.751469 2088124 cri.go:89] found id: ""
	I1216 04:14:43.751496 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.751509 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:43.751516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:43.751577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:43.776779 2088124 cri.go:89] found id: ""
	I1216 04:14:43.776804 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.776812 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:43.776818 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:43.776876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:43.801004 2088124 cri.go:89] found id: ""
	I1216 04:14:43.801028 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.801037 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:43.801044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:43.801131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:43.825723 2088124 cri.go:89] found id: ""
	I1216 04:14:43.825747 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.825756 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:43.825763 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:43.825823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:43.854440 2088124 cri.go:89] found id: ""
	I1216 04:14:43.854464 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.854473 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:43.854479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:43.854537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:43.881228 2088124 cri.go:89] found id: ""
	I1216 04:14:43.881251 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.881261 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:43.881270 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:43.881282 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:43.908258 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:43.908330 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:43.975235 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:43.975273 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:44.032765 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:44.032798 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:44.097769 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:44.088888    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.089678    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.091397    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.092058    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.093573    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:44.088888    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.089678    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.091397    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.092058    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.093573    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:44.097791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:44.097814 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
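Each cycle ends by assembling the same five-part log bundle ("Gathering logs for ..."). Note that the gather order shifts between cycles above, which is consistent with iterating a Go map. A minimal sketch using the exact command strings from the log; the map-based structure itself is an assumption for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command strings copied verbatim from the gather lines above.
    	gathers := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range gathers {
    		// Go randomizes map iteration order, which would explain the
    		// shifting gather order between the cycles in this log.
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("=== %s (err=%v, %d bytes) ===\n", name, err, len(out))
    	}
    }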
	I1216 04:14:46.624214 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:46.634860 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:46.634939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:46.662490 2088124 cri.go:89] found id: ""
	I1216 04:14:46.662518 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.662528 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:46.662534 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:46.662598 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:46.687532 2088124 cri.go:89] found id: ""
	I1216 04:14:46.687558 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.687567 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:46.687574 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:46.687639 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:46.711951 2088124 cri.go:89] found id: ""
	I1216 04:14:46.711978 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.711988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:46.711994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:46.712054 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:46.742207 2088124 cri.go:89] found id: ""
	I1216 04:14:46.742241 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.742250 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:46.742257 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:46.742331 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:46.766943 2088124 cri.go:89] found id: ""
	I1216 04:14:46.766972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.766981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:46.766988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:46.767070 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:46.792400 2088124 cri.go:89] found id: ""
	I1216 04:14:46.792432 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.792442 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:46.792455 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:46.792533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:46.817511 2088124 cri.go:89] found id: ""
	I1216 04:14:46.817533 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.817542 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:46.817548 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:46.817610 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:46.845432 2088124 cri.go:89] found id: ""
	I1216 04:14:46.845455 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.845464 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:46.845473 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:46.845484 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:46.901017 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:46.901050 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:46.916980 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:46.917012 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:47.034196 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:47.019727    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.020515    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022325    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022651    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.026596    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:47.019727    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.020515    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022325    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022651    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.026596    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:47.034216 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:47.034230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:47.060131 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:47.060167 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:49.592378 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:49.603274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:49.603390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:49.628592 2088124 cri.go:89] found id: ""
	I1216 04:14:49.628617 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.628626 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:49.628632 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:49.628693 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:49.654951 2088124 cri.go:89] found id: ""
	I1216 04:14:49.654974 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.654983 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:49.654990 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:49.655079 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:49.680966 2088124 cri.go:89] found id: ""
	I1216 04:14:49.680992 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.681004 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:49.681011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:49.681077 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:49.705520 2088124 cri.go:89] found id: ""
	I1216 04:14:49.705549 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.705558 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:49.705565 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:49.705624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:49.735615 2088124 cri.go:89] found id: ""
	I1216 04:14:49.735643 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.735653 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:49.735660 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:49.735723 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:49.761693 2088124 cri.go:89] found id: ""
	I1216 04:14:49.761721 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.761730 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:49.761736 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:49.761799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:49.786810 2088124 cri.go:89] found id: ""
	I1216 04:14:49.786852 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.786866 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:49.786875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:49.786943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:49.815183 2088124 cri.go:89] found id: ""
	I1216 04:14:49.815209 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.815218 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:49.815236 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:49.815247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:49.870316 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:49.870351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:49.886698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:49.886724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:50.017086 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:49.989874    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.000829    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.004526    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.006532    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.011272    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:49.989874    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.000829    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.004526    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.006532    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.011272    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:50.017115 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:50.017137 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:50.046781 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:50.046822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
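The `{State:all Name:... Namespaces:[]}` dumps printed on every cri.go:54 line above are a Go struct rendered with %+v. A sketch that reproduces the exact log format; the field names come from the dump itself, while the type definition is an illustrative assumption rather than minikube's real one:

    package main

    import "fmt"

    // ListOptions mirrors the filter dumped on each cri.go:54 line above.
    type ListOptions struct {
    	State      string   // "all" maps to crictl's -a flag
    	Name       string   // becomes --name=<Name>
    	Namespaces []string // empty in every listing in this run
    }

    func main() {
    	fmt.Printf("%+v\n", ListOptions{State: "all", Name: "kube-apiserver"})
    	// prints: {State:all Name:kube-apiserver Namespaces:[]}
    }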
	I1216 04:14:52.580326 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:52.591108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:52.591184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:52.619853 2088124 cri.go:89] found id: ""
	I1216 04:14:52.619876 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.619884 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:52.619891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:52.619973 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:52.644168 2088124 cri.go:89] found id: ""
	I1216 04:14:52.644191 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.644199 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:52.644205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:52.644266 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:52.669818 2088124 cri.go:89] found id: ""
	I1216 04:14:52.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.669850 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:52.669856 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:52.669916 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:52.695228 2088124 cri.go:89] found id: ""
	I1216 04:14:52.695252 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.695260 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:52.695267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:52.695329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:52.720235 2088124 cri.go:89] found id: ""
	I1216 04:14:52.720260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.720269 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:52.720275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:52.720339 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:52.749551 2088124 cri.go:89] found id: ""
	I1216 04:14:52.749574 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.749582 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:52.749589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:52.749651 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:52.776351 2088124 cri.go:89] found id: ""
	I1216 04:14:52.776375 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.776383 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:52.776389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:52.776450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:52.805147 2088124 cri.go:89] found id: ""
	I1216 04:14:52.805175 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.805185 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:52.805195 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:52.805211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:52.831059 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:52.831098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:52.861113 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:52.861143 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:52.916847 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:52.916883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:52.933489 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:52.933517 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:53.043697 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:53.034203    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.035135    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.036921    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.037478    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.039232    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:53.034203    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.035135    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.036921    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.037478    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.039232    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:55.544026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:55.554861 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:55.554956 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:55.578474 2088124 cri.go:89] found id: ""
	I1216 04:14:55.578502 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.578511 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:55.578518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:55.578633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:55.602756 2088124 cri.go:89] found id: ""
	I1216 04:14:55.602795 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.602804 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:55.602811 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:55.602900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:55.633011 2088124 cri.go:89] found id: ""
	I1216 04:14:55.633035 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.633043 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:55.633049 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:55.633136 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:55.658213 2088124 cri.go:89] found id: ""
	I1216 04:14:55.658247 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.658257 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:55.658280 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:55.658411 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:55.683154 2088124 cri.go:89] found id: ""
	I1216 04:14:55.683183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.683201 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:55.683208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:55.683280 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:55.707894 2088124 cri.go:89] found id: ""
	I1216 04:14:55.707968 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.707991 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:55.708010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:55.708099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:55.732419 2088124 cri.go:89] found id: ""
	I1216 04:14:55.732506 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.732531 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:55.732543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:55.732624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:55.760911 2088124 cri.go:89] found id: ""
	I1216 04:14:55.760981 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.761007 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:55.761023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:55.761038 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:55.817437 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:55.817473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:55.833374 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:55.833405 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:55.898151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:55.890310    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.890838    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892319    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892862    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.894354    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:55.890310    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.890838    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892319    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892862    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.894354    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:55.898175 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:55.898195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:55.923776 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:55.923810 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:58.462512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:58.474113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:58.474190 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:58.500558 2088124 cri.go:89] found id: ""
	I1216 04:14:58.500581 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.500590 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:58.500597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:58.500659 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:58.525784 2088124 cri.go:89] found id: ""
	I1216 04:14:58.525809 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.525818 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:58.525824 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:58.525883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:58.550534 2088124 cri.go:89] found id: ""
	I1216 04:14:58.550560 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.550570 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:58.550577 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:58.550634 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:58.577140 2088124 cri.go:89] found id: ""
	I1216 04:14:58.577167 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.577177 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:58.577184 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:58.577244 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:58.605864 2088124 cri.go:89] found id: ""
	I1216 04:14:58.605890 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.605904 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:58.605911 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:58.605975 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:58.634121 2088124 cri.go:89] found id: ""
	I1216 04:14:58.634152 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.634161 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:58.634168 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:58.634239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:58.660170 2088124 cri.go:89] found id: ""
	I1216 04:14:58.660198 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.660207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:58.660213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:58.660273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:58.685306 2088124 cri.go:89] found id: ""
	I1216 04:14:58.685333 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.685342 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:58.685351 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:58.685364 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:58.741326 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:58.741362 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:58.757562 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:58.757594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:58.823813 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:58.815321    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.815799    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817390    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817823    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.819361    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:58.815321    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.815799    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817390    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817823    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.819361    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:58.823838 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:58.823854 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:58.849684 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:58.849722 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.379834 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:01.391065 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:01.391142 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:01.417501 2088124 cri.go:89] found id: ""
	I1216 04:15:01.417579 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.417602 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:01.417643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:01.417737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:01.448334 2088124 cri.go:89] found id: ""
	I1216 04:15:01.448360 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.448368 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:01.448375 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:01.448447 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:01.476980 2088124 cri.go:89] found id: ""
	I1216 04:15:01.477006 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.477015 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:01.477022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:01.477108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:01.501087 2088124 cri.go:89] found id: ""
	I1216 04:15:01.501110 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.501118 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:01.501125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:01.501183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:01.526116 2088124 cri.go:89] found id: ""
	I1216 04:15:01.526139 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.526147 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:01.526154 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:01.526217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:01.552211 2088124 cri.go:89] found id: ""
	I1216 04:15:01.552234 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.552249 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:01.552255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:01.552314 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:01.579190 2088124 cri.go:89] found id: ""
	I1216 04:15:01.579220 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.579229 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:01.579243 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:01.579362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:01.606084 2088124 cri.go:89] found id: ""
	I1216 04:15:01.606108 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.606118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:01.606127 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:01.606139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.638251 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:01.638281 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:01.698103 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:01.698145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:01.714771 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:01.714858 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:01.780079 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:01.771727    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.772399    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774006    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774479    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.776016    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:01.771727    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.772399    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774006    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774479    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.776016    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:01.780150 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:01.780177 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
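Each pass of the loop enumerates the expected control-plane containers by name against containerd's CRI socket (the cri.go lines above). With --quiet, crictl prints one container ID per line, so an empty result is exactly what produces the `found id: ""` / `0 containers` pairs in the log. A sketch of the same probe, assuming crictl is already pointed at the containerd endpoint:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same command as the log:
    //   sudo crictl ps -a --quiet --name=<component>
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; Fields drops blanks
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            if ids, err := listContainerIDs(c); err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }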
	I1216 04:15:04.307354 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:04.318980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:04.319082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:04.348465 2088124 cri.go:89] found id: ""
	I1216 04:15:04.348496 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.348506 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:04.348513 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:04.348593 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:04.374442 2088124 cri.go:89] found id: ""
	I1216 04:15:04.374467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.374476 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:04.374485 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:04.374543 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:04.401352 2088124 cri.go:89] found id: ""
	I1216 04:15:04.401376 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.401384 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:04.401390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:04.401448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:04.427946 2088124 cri.go:89] found id: ""
	I1216 04:15:04.427969 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.427978 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:04.427984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:04.428044 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:04.453439 2088124 cri.go:89] found id: ""
	I1216 04:15:04.453474 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.453483 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:04.453490 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:04.453549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:04.478368 2088124 cri.go:89] found id: ""
	I1216 04:15:04.478395 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.478403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:04.478409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:04.478467 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:04.502274 2088124 cri.go:89] found id: ""
	I1216 04:15:04.502303 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.502312 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:04.502318 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:04.502379 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:04.526440 2088124 cri.go:89] found id: ""
	I1216 04:15:04.526467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.526475 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:04.526484 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:04.526494 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:04.581559 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:04.581596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:04.597786 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:04.597815 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:04.661194 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:04.653077    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.653620    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655229    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655719    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.657352    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:04.653077    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.653620    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655229    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655719    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.657352    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:04.661217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:04.661230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:04.686508 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:04.686544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.214226 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:07.226828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:07.226904 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:07.267774 2088124 cri.go:89] found id: ""
	I1216 04:15:07.267805 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.267814 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:07.267820 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:07.267880 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:07.293953 2088124 cri.go:89] found id: ""
	I1216 04:15:07.293980 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.293988 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:07.293994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:07.294052 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:07.317542 2088124 cri.go:89] found id: ""
	I1216 04:15:07.317568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.317577 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:07.317583 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:07.317695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:07.351422 2088124 cri.go:89] found id: ""
	I1216 04:15:07.351449 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.351458 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:07.351465 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:07.351552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:07.376043 2088124 cri.go:89] found id: ""
	I1216 04:15:07.376069 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.376092 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:07.376121 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:07.376204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:07.400719 2088124 cri.go:89] found id: ""
	I1216 04:15:07.400749 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.400758 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:07.400765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:07.400849 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:07.425726 2088124 cri.go:89] found id: ""
	I1216 04:15:07.425754 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.425763 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:07.425769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:07.425833 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:07.450385 2088124 cri.go:89] found id: ""
	I1216 04:15:07.450413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.450422 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:07.450431 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:07.450444 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.482416 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:07.482446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:07.543525 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:07.543569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:07.559963 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:07.559991 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:07.626193 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:07.617713    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.618478    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620010    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620349    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.621831    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:07.617713    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.618478    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620010    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620349    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.621831    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:07.626217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:07.626233 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.151663 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:10.162850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:10.162922 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:10.218459 2088124 cri.go:89] found id: ""
	I1216 04:15:10.218492 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.218502 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:10.218508 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:10.218581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:10.266687 2088124 cri.go:89] found id: ""
	I1216 04:15:10.266716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.266726 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:10.266732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:10.266794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:10.297579 2088124 cri.go:89] found id: ""
	I1216 04:15:10.297607 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.297616 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:10.297623 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:10.297682 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:10.327612 2088124 cri.go:89] found id: ""
	I1216 04:15:10.327637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.327646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:10.327652 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:10.327710 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:10.352049 2088124 cri.go:89] found id: ""
	I1216 04:15:10.352073 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.352082 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:10.352088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:10.352150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:10.380981 2088124 cri.go:89] found id: ""
	I1216 04:15:10.381005 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.381013 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:10.381020 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:10.381083 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:10.405173 2088124 cri.go:89] found id: ""
	I1216 04:15:10.405198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.405207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:10.405213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:10.405271 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:10.430194 2088124 cri.go:89] found id: ""
	I1216 04:15:10.430219 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.430248 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:10.430259 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:10.430272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:10.486344 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:10.486381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:10.502248 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:10.502278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:10.568856 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:10.561184    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.561736    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563232    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563538    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.565000    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:10.561184    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.561736    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563232    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563538    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.565000    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:10.568879 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:10.568893 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.595314 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:10.595349 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
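The "container status" gather is the one command here with a built-in fallback chain: resolve crictl from PATH via which, fall back to the bare crictl name, and if the whole crictl listing fails, try the docker CLI instead. A sketch that shells out the same one-liner, assuming /bin/bash on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Identical fallback chain to the log's container-status gather.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status unavailable:", err)
            return
        }
        fmt.Print(string(out))
    }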
	I1216 04:15:13.125478 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:13.136862 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:13.136937 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:13.162398 2088124 cri.go:89] found id: ""
	I1216 04:15:13.162432 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.162442 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:13.162449 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:13.162512 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:13.213417 2088124 cri.go:89] found id: ""
	I1216 04:15:13.213443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.213451 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:13.213457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:13.213515 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:13.265047 2088124 cri.go:89] found id: ""
	I1216 04:15:13.265074 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.265082 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:13.265089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:13.265146 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:13.295404 2088124 cri.go:89] found id: ""
	I1216 04:15:13.295431 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.295442 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:13.295448 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:13.295510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:13.320244 2088124 cri.go:89] found id: ""
	I1216 04:15:13.320272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.320281 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:13.320288 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:13.320347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:13.343989 2088124 cri.go:89] found id: ""
	I1216 04:15:13.344013 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.344022 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:13.344028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:13.344088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:13.367813 2088124 cri.go:89] found id: ""
	I1216 04:15:13.367838 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.367847 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:13.367854 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:13.367914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:13.391747 2088124 cri.go:89] found id: ""
	I1216 04:15:13.391772 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.391782 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:13.391791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:13.391802 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:13.416337 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:13.416373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:13.443257 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:13.443286 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:13.501977 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:13.502016 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:13.517698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:13.517730 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:13.580974 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:13.572384    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.573100    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.574739    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.575073    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.576736    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:13.572384    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.573100    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.574739    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.575073    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.576736    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
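Every describe-nodes attempt above dies the same way: the kubeconfig at /var/lib/minikube/kubeconfig targets https://localhost:8443, nothing is listening there, and client-go's discovery cache (the memcache.go:265 lines) retries a handful of times before kubectl prints the final "connection refused" summary. The repeated `dial tcp [::1]:8443` shows localhost resolving to the IPv6 loopback first. A sketch of the underlying condition, assuming a raw TCP probe as a stand-in for client-go discovery:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the same address the failed requests in the log dialed.
        conn, err := net.DialTimeout("tcp", "[::1]:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }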
	I1216 04:15:16.081274 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:16.092248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:16.092325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:16.118104 2088124 cri.go:89] found id: ""
	I1216 04:15:16.118128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.118138 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:16.118145 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:16.118207 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:16.148494 2088124 cri.go:89] found id: ""
	I1216 04:15:16.148519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.148529 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:16.148535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:16.148600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:16.177106 2088124 cri.go:89] found id: ""
	I1216 04:15:16.177133 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.177142 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:16.177148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:16.177209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:16.225478 2088124 cri.go:89] found id: ""
	I1216 04:15:16.225512 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.225521 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:16.225528 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:16.225601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:16.263615 2088124 cri.go:89] found id: ""
	I1216 04:15:16.263642 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.263651 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:16.263657 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:16.263717 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:16.288816 2088124 cri.go:89] found id: ""
	I1216 04:15:16.288840 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.288849 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:16.288855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:16.288915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:16.313866 2088124 cri.go:89] found id: ""
	I1216 04:15:16.313899 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.313909 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:16.313915 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:16.313986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:16.338822 2088124 cri.go:89] found id: ""
	I1216 04:15:16.338847 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.338865 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:16.338874 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:16.338886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:16.397500 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:16.397535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:16.413373 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:16.413401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:16.481369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:16.473361    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.474033    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475539    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475954    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.477408    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:16.473361    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.474033    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475539    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475954    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.477408    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:16.481391 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:16.481404 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:16.506768 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:16.506801 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:19.036905 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:19.047523 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:19.047594 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:19.071924 2088124 cri.go:89] found id: ""
	I1216 04:15:19.071947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.071956 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:19.071963 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:19.072020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:19.096694 2088124 cri.go:89] found id: ""
	I1216 04:15:19.096716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.096736 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:19.096742 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:19.096808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:19.122106 2088124 cri.go:89] found id: ""
	I1216 04:15:19.122129 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.122137 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:19.122144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:19.122204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:19.151300 2088124 cri.go:89] found id: ""
	I1216 04:15:19.151327 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.151337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:19.151346 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:19.151407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:19.176879 2088124 cri.go:89] found id: ""
	I1216 04:15:19.176906 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.176915 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:19.176921 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:19.176982 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:19.248606 2088124 cri.go:89] found id: ""
	I1216 04:15:19.248637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.248646 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:19.248654 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:19.248720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:19.284067 2088124 cri.go:89] found id: ""
	I1216 04:15:19.284095 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.284105 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:19.284111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:19.284179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:19.309536 2088124 cri.go:89] found id: ""
	I1216 04:15:19.309564 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.309573 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:19.309583 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:19.309595 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:19.336019 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:19.336059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:19.363926 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:19.363997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:19.420745 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:19.420779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:19.437274 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:19.437306 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:19.501939 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:19.493862    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.494691    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496168    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496603    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.498063    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:19.493862    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.494691    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496168    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496603    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.498063    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
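Alongside the CRI probes, each pass tails the last 400 lines of the kubelet and containerd journald units and filters the kernel ring buffer down to warning-or-worse messages. A sketch of the journal tail, assuming systemd-journald on the node (tailUnit is a hypothetical helper, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailUnit mirrors the log's journalctl calls: the last n lines of one
    // systemd unit, e.g. "kubelet" or "containerd".
    func tailUnit(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
        return string(out), err
    }

    func main() {
        for _, u := range []string{"kubelet", "containerd"} {
            if out, err := tailUnit(u, 400); err == nil {
                fmt.Print(out)
            }
        }
    }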
	I1216 04:15:22.002831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:22.019000 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:22.019099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:22.045729 2088124 cri.go:89] found id: ""
	I1216 04:15:22.045753 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.045762 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:22.045769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:22.045831 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:22.073468 2088124 cri.go:89] found id: ""
	I1216 04:15:22.073494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.073504 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:22.073511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:22.073572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:22.099372 2088124 cri.go:89] found id: ""
	I1216 04:15:22.099397 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.099407 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:22.099413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:22.099475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:22.124283 2088124 cri.go:89] found id: ""
	I1216 04:15:22.124358 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.124371 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:22.124378 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:22.124509 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:22.149430 2088124 cri.go:89] found id: ""
	I1216 04:15:22.149456 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.149466 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:22.149472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:22.149532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:22.179789 2088124 cri.go:89] found id: ""
	I1216 04:15:22.179813 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.179822 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:22.179829 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:22.179920 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:22.233299 2088124 cri.go:89] found id: ""
	I1216 04:15:22.233333 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.233342 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:22.233380 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:22.233495 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:22.281260 2088124 cri.go:89] found id: ""
	I1216 04:15:22.281287 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.281296 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:22.281305 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:22.281354 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:22.299880 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:22.299908 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:22.370389 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:22.359272    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.360819    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.361789    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.363665    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.365341    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:22.370413 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:22.370427 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:22.395585 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:22.395618 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:22.423071 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:22.423103 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
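	Each retry in this wait loop performs the same probe: pgrep for a running kube-apiserver, then a per-component crictl query. The checks can be reproduced by hand from a shell inside the node (a minimal sketch, assuming `minikube ssh` access to the affected profile and the same crictl binary the log itself invokes):

	    # Sketch: re-run minikube's control-plane container probe manually.
	    # Component list and crictl flags are taken from the log lines above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container matching \"$name\""   # mirrors the logs.go:284 warnings
	      else
	        echo "$name: $ids"
	      fi
	    done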
	I1216 04:15:24.979909 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:24.990414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:24.990487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:25.022894 2088124 cri.go:89] found id: ""
	I1216 04:15:25.022933 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.022942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:25.022950 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:25.023035 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:25.057555 2088124 cri.go:89] found id: ""
	I1216 04:15:25.057592 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.057602 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:25.057609 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:25.057674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:25.084421 2088124 cri.go:89] found id: ""
	I1216 04:15:25.084446 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.084455 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:25.084462 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:25.084534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:25.112223 2088124 cri.go:89] found id: ""
	I1216 04:15:25.112249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.112258 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:25.112266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:25.112340 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:25.138162 2088124 cri.go:89] found id: ""
	I1216 04:15:25.138186 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.138195 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:25.138202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:25.138262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:25.165660 2088124 cri.go:89] found id: ""
	I1216 04:15:25.165689 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.165698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:25.165705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:25.165775 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:25.213233 2088124 cri.go:89] found id: ""
	I1216 04:15:25.213260 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.213269 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:25.213275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:25.213333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:25.254540 2088124 cri.go:89] found id: ""
	I1216 04:15:25.254567 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.254576 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:25.254586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:25.254599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:25.290970 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:25.290997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:25.349010 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:25.349046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:25.364592 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:25.364626 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:25.428643 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:25.420082    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.420822    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.422550    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.423262    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.424839    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:25.428666 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:25.428680 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
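	Every describe-nodes attempt fails the same way: kubectl dials localhost:8443 and gets connection refused, i.e. nothing is serving the apiserver port yet. A quick way to confirm that from inside the node (a sketch; curl and ss are assumed to be available there, and 8443 is simply the port the errors above report):

	    # Sketch: check whether anything is listening on the port kubectl dials.
	    # Expect "connection refused" while the control plane is still down.
	    curl -k --max-time 5 https://localhost:8443/healthz
	    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"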
	I1216 04:15:27.954878 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:27.965363 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:27.965430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:27.991310 2088124 cri.go:89] found id: ""
	I1216 04:15:27.991338 2088124 logs.go:282] 0 containers: []
	W1216 04:15:27.991347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:27.991354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:27.991416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:28.017496 2088124 cri.go:89] found id: ""
	I1216 04:15:28.017519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.017528 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:28.017535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:28.017600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:28.043243 2088124 cri.go:89] found id: ""
	I1216 04:15:28.043267 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.043276 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:28.043282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:28.043349 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:28.070592 2088124 cri.go:89] found id: ""
	I1216 04:15:28.070620 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.070629 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:28.070635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:28.070705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:28.096408 2088124 cri.go:89] found id: ""
	I1216 04:15:28.096430 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.096439 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:28.096446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:28.096517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:28.122523 2088124 cri.go:89] found id: ""
	I1216 04:15:28.122547 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.122556 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:28.122563 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:28.122627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:28.148233 2088124 cri.go:89] found id: ""
	I1216 04:15:28.148256 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.148264 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:28.148270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:28.148335 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:28.174686 2088124 cri.go:89] found id: ""
	I1216 04:15:28.174715 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.174724 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:28.174733 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:28.174745 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:28.248922 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:28.249042 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:28.270319 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:28.270345 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:28.344544 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:28.335802    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.336388    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338081    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338450    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.339995    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:28.344568 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:28.344583 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:28.370869 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:28.370905 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
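	Between probes the loop re-collects the same four evidence streams: kubelet and containerd unit logs, kernel warnings, and the container list. The same bundle can be captured once, outside the retry loop (a sketch reusing the exact commands from the log, with --no-pager and output redirection added):

	    # Sketch: capture the diagnostics minikube gathers on each retry.
	    sudo journalctl -u kubelet -n 400 --no-pager    > kubelet.log
	    sudo journalctl -u containerd -n 400 --no-pager > containerd.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo crictl ps -a                               > containers.log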
	I1216 04:15:30.901180 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:30.914236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:30.914316 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:30.943226 2088124 cri.go:89] found id: ""
	I1216 04:15:30.943247 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.943255 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:30.943262 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:30.943320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:30.969548 2088124 cri.go:89] found id: ""
	I1216 04:15:30.969573 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.969581 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:30.969588 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:30.969648 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:30.996727 2088124 cri.go:89] found id: ""
	I1216 04:15:30.996750 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.996759 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:30.996765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:30.996823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:31.023099 2088124 cri.go:89] found id: ""
	I1216 04:15:31.023125 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.023133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:31.023140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:31.023202 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:31.052543 2088124 cri.go:89] found id: ""
	I1216 04:15:31.052568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.052577 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:31.052584 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:31.052646 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:31.079096 2088124 cri.go:89] found id: ""
	I1216 04:15:31.079119 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.079128 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:31.079134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:31.079197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:31.108706 2088124 cri.go:89] found id: ""
	I1216 04:15:31.108777 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.108801 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:31.108815 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:31.108894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:31.138097 2088124 cri.go:89] found id: ""
	I1216 04:15:31.138122 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.138130 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:31.138140 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:31.138152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:31.163977 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:31.164066 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:31.220358 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:31.220432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:31.291830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:31.291912 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:31.307651 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:31.307678 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:31.376724 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:31.367014    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369142    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369906    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371443    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371922    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:33.876969 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:33.887678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:33.887751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:33.911475 2088124 cri.go:89] found id: ""
	I1216 04:15:33.911503 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.911513 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:33.911520 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:33.911581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:33.936829 2088124 cri.go:89] found id: ""
	I1216 04:15:33.936852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.936861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:33.936866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:33.936924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:33.961061 2088124 cri.go:89] found id: ""
	I1216 04:15:33.961085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.961094 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:33.961101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:33.961168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:33.985053 2088124 cri.go:89] found id: ""
	I1216 04:15:33.985078 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.985086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:33.985093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:33.985154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:34.015083 2088124 cri.go:89] found id: ""
	I1216 04:15:34.015112 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.015122 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:34.015129 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:34.015191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:34.040899 2088124 cri.go:89] found id: ""
	I1216 04:15:34.040922 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.040930 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:34.040936 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:34.041001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:34.066663 2088124 cri.go:89] found id: ""
	I1216 04:15:34.066744 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.066771 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:34.066792 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:34.066877 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:34.092631 2088124 cri.go:89] found id: ""
	I1216 04:15:34.092708 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.092733 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:34.092749 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:34.092762 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:34.151180 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:34.151218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:34.167672 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:34.167704 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:34.288358 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:34.277084    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.277708    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.280379    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282393    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282865    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:34.288382 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:34.288395 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:34.313627 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:34.313660 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:36.841874 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:36.852005 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:36.852078 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:36.875574 2088124 cri.go:89] found id: ""
	I1216 04:15:36.875598 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.875608 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:36.875614 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:36.875674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:36.904945 2088124 cri.go:89] found id: ""
	I1216 04:15:36.905021 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.905045 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:36.905057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:36.905119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:36.930221 2088124 cri.go:89] found id: ""
	I1216 04:15:36.930249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.930259 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:36.930266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:36.930326 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:36.955843 2088124 cri.go:89] found id: ""
	I1216 04:15:36.955870 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.955880 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:36.955887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:36.955947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:36.979492 2088124 cri.go:89] found id: ""
	I1216 04:15:36.979557 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.979583 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:36.979596 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:36.979667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:37.004015 2088124 cri.go:89] found id: ""
	I1216 04:15:37.004045 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.004056 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:37.004064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:37.004144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:37.033766 2088124 cri.go:89] found id: ""
	I1216 04:15:37.033841 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.033868 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:37.033887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:37.033980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:37.058994 2088124 cri.go:89] found id: ""
	I1216 04:15:37.059087 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.059115 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:37.059132 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:37.059146 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:37.121921 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:37.113226    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.113740    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115405    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115894    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.117543    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:37.121943 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:37.121956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:37.148246 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:37.148285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:37.178974 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:37.179077 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:37.249870 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:37.249909 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
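	The cycle timestamps (roughly every three seconds from 04:15:22 onward) show the shape of the wait: poll for the apiserver process, gather logs, sleep, repeat. In shell form the pattern looks like this (a sketch of the observed behavior, not minikube's actual implementation; only the pgrep pattern is copied from the log):

	    # Sketch: the poll-until-apiserver-appears pattern recorded above.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' > /dev/null; do
	      echo "apiserver not running yet; collecting logs and retrying"
	      sleep 3   # interval approximated from the log timestamps
	    done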
	I1216 04:15:39.789446 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:39.800133 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:39.800214 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:39.824765 2088124 cri.go:89] found id: ""
	I1216 04:15:39.824794 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.824803 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:39.824810 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:39.824872 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:39.849338 2088124 cri.go:89] found id: ""
	I1216 04:15:39.849362 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.849370 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:39.849377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:39.849435 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:39.873874 2088124 cri.go:89] found id: ""
	I1216 04:15:39.873902 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.873911 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:39.873917 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:39.873976 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:39.899109 2088124 cri.go:89] found id: ""
	I1216 04:15:39.899134 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.899143 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:39.899149 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:39.899210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:39.924102 2088124 cri.go:89] found id: ""
	I1216 04:15:39.924128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.924137 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:39.924143 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:39.924208 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:39.949033 2088124 cri.go:89] found id: ""
	I1216 04:15:39.949065 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.949074 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:39.949082 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:39.949144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:39.975169 2088124 cri.go:89] found id: ""
	I1216 04:15:39.975198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.975207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:39.975213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:39.975273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:40.028056 2088124 cri.go:89] found id: ""
	I1216 04:15:40.028085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:40.028094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:40.028104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:40.028116 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:40.085250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:40.085285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:40.101589 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:40.101621 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:40.174562 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:40.165816    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.166569    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.167348    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.168955    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.169429    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:40.174584 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:40.174599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:40.202884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:40.202920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:42.752364 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:42.763300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:42.763369 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:42.792503 2088124 cri.go:89] found id: ""
	I1216 04:15:42.792529 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.792539 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:42.792545 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:42.792608 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:42.821201 2088124 cri.go:89] found id: ""
	I1216 04:15:42.821226 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.821235 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:42.821242 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:42.821304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:42.847075 2088124 cri.go:89] found id: ""
	I1216 04:15:42.847102 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.847110 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:42.847117 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:42.847179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:42.871486 2088124 cri.go:89] found id: ""
	I1216 04:15:42.871510 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.871519 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:42.871525 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:42.871589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:42.896375 2088124 cri.go:89] found id: ""
	I1216 04:15:42.896402 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.896412 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:42.896418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:42.896505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:42.921735 2088124 cri.go:89] found id: ""
	I1216 04:15:42.921811 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.921844 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:42.921865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:42.921950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:42.950925 2088124 cri.go:89] found id: ""
	I1216 04:15:42.950947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.950955 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:42.950961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:42.951019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:42.975785 2088124 cri.go:89] found id: ""
	I1216 04:15:42.975809 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.975817 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:42.975826 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:42.975840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:42.991441 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:42.991473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:43.054494 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:43.046352    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.047054    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.048590    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.049073    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.050630    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:43.054518 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:43.054532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:43.079941 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:43.079979 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:43.107712 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:43.107738 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:45.663276 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:45.674206 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:45.674325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:45.698711 2088124 cri.go:89] found id: ""
	I1216 04:15:45.698736 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.698745 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:45.698752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:45.698822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:45.723389 2088124 cri.go:89] found id: ""
	I1216 04:15:45.723413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.723422 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:45.723428 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:45.723494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:45.748842 2088124 cri.go:89] found id: ""
	I1216 04:15:45.748919 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.748935 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:45.748942 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:45.749002 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:45.777156 2088124 cri.go:89] found id: ""
	I1216 04:15:45.777236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.777251 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:45.777265 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:45.777327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:45.802462 2088124 cri.go:89] found id: ""
	I1216 04:15:45.802494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.802503 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:45.802510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:45.802583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:45.829417 2088124 cri.go:89] found id: ""
	I1216 04:15:45.829442 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.829451 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:45.829458 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:45.829521 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:45.854934 2088124 cri.go:89] found id: ""
	I1216 04:15:45.854962 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.854971 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:45.854977 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:45.855095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:45.879249 2088124 cri.go:89] found id: ""
	I1216 04:15:45.879272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.879280 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:45.879289 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:45.879301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:45.895118 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:45.895155 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:45.958262 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:45.949768    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.950592    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952181    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952681    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.954304    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:45.958284 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:45.958298 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:45.984226 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:45.984260 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:46.015984 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:46.016011 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
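The cycle above is minikube's apiserver health probe, repeated every few seconds: it first looks for a running kube-apiserver process with pgrep, then asks crictl for any matching container (running or exited), and only when both come back empty does it fall back to gathering logs. A minimal sketch of the same probe run by hand inside the node, assuming a shell obtained via "minikube ssh -p <profile>" (the profile name is a placeholder, not taken from this log):

    #!/bin/bash
    # Probe for the apiserver the same way the log above does.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" || echo "no apiserver process"
    # crictl prints one container ID per line; empty output is what appears
    # above as: found id: "" / 0 containers.
    sudo crictl ps -a --quiet --name=kube-apiserver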
	I1216 04:15:48.576053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:48.586849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:48.586923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:48.612368 2088124 cri.go:89] found id: ""
	I1216 04:15:48.612394 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.612404 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:48.612410 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:48.612470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:48.641259 2088124 cri.go:89] found id: ""
	I1216 04:15:48.641288 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.641297 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:48.641304 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:48.641368 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:48.665587 2088124 cri.go:89] found id: ""
	I1216 04:15:48.665614 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.665624 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:48.665629 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:48.665704 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:48.691123 2088124 cri.go:89] found id: ""
	I1216 04:15:48.691151 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.691160 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:48.691167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:48.691227 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:48.716275 2088124 cri.go:89] found id: ""
	I1216 04:15:48.716304 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.716314 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:48.716320 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:48.716381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:48.747209 2088124 cri.go:89] found id: ""
	I1216 04:15:48.747236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.747244 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:48.747250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:48.747312 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:48.776967 2088124 cri.go:89] found id: ""
	I1216 04:15:48.776991 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.777001 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:48.777010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:48.777071 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:48.800940 2088124 cri.go:89] found id: ""
	I1216 04:15:48.800965 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.800975 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:48.800985 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:48.800997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:48.856499 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:48.856533 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:48.872208 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:48.872239 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:48.945493 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:48.936737    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.937621    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939381    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939979    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.941612    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:48.945516 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:48.945529 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:48.970477 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:48.970510 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:51.499166 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:51.515506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:51.515579 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:51.540271 2088124 cri.go:89] found id: ""
	I1216 04:15:51.540297 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.540306 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:51.540313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:51.540373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:51.564213 2088124 cri.go:89] found id: ""
	I1216 04:15:51.564235 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.564244 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:51.564250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:51.564309 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:51.592901 2088124 cri.go:89] found id: ""
	I1216 04:15:51.592924 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.592933 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:51.592939 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:51.593001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:51.617803 2088124 cri.go:89] found id: ""
	I1216 04:15:51.617831 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.617840 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:51.617847 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:51.617906 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:51.643791 2088124 cri.go:89] found id: ""
	I1216 04:15:51.643814 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.643822 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:51.643830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:51.643894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:51.669293 2088124 cri.go:89] found id: ""
	I1216 04:15:51.669324 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.669335 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:51.669345 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:51.669416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:51.697129 2088124 cri.go:89] found id: ""
	I1216 04:15:51.697155 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.697164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:51.697170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:51.697235 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:51.725605 2088124 cri.go:89] found id: ""
	I1216 04:15:51.725631 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.725640 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:51.725650 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:51.725664 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:51.781941 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:51.781976 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:51.798346 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:51.798372 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:51.861456 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:51.853947    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.854888    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.855930    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.856728    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.857532    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:51.861478 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:51.861491 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:51.886476 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:51.886511 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
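Every "describe nodes" attempt fails identically: kubectl's discovery client retries five times and each dial to [::1]:8443 is refused outright. "Connection refused" means nothing is listening on the apiserver port at all, as opposed to a listener that is slow or returning errors. A quick way to confirm that distinction from inside the node (a sketch; curl and ss are assumptions, not tools shown in this log):

    #!/bin/bash
    # Refused vs. timeout: refused means no process owns the port.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # Unauthenticated healthz probe; -k skips TLS verification.
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"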
	I1216 04:15:54.421185 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:54.432641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:54.432721 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:54.487902 2088124 cri.go:89] found id: ""
	I1216 04:15:54.487936 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.487945 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:54.487952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:54.488026 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:54.530347 2088124 cri.go:89] found id: ""
	I1216 04:15:54.530372 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.530381 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:54.530387 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:54.530450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:54.558305 2088124 cri.go:89] found id: ""
	I1216 04:15:54.558339 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.558348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:54.558354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:54.558423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:54.584247 2088124 cri.go:89] found id: ""
	I1216 04:15:54.584271 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.584280 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:54.584286 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:54.584347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:54.608497 2088124 cri.go:89] found id: ""
	I1216 04:15:54.608526 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.608536 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:54.608542 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:54.608601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:54.634256 2088124 cri.go:89] found id: ""
	I1216 04:15:54.634283 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.634293 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:54.634301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:54.634360 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:54.659092 2088124 cri.go:89] found id: ""
	I1216 04:15:54.659132 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.659141 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:54.659148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:54.659210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:54.683797 2088124 cri.go:89] found id: ""
	I1216 04:15:54.683823 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.683832 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:54.683841 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:54.683852 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:54.713212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:54.713238 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:54.769163 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:54.769199 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:54.784702 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:54.784742 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:54.855379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:54.846290    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.847296    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.848173    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.849670    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.850187    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:54.855412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:54.855425 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
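Each failed cycle ends by collecting the same four diagnostics: the kubelet and containerd journals, the kernel ring buffer filtered to warn-and-above, and the container list. A sketch of that collection as a standalone script, using only commands visible in the log (assumed to run inside the node):

    #!/bin/bash
    # Gather the diagnostics minikube collects after each failed probe.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.txt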
	I1216 04:15:57.382388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:57.393144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:57.393234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:57.418375 2088124 cri.go:89] found id: ""
	I1216 04:15:57.418443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.418467 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:57.418486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:57.418574 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:57.495590 2088124 cri.go:89] found id: ""
	I1216 04:15:57.495668 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.495694 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:57.495716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:57.495813 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:57.536762 2088124 cri.go:89] found id: ""
	I1216 04:15:57.536786 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.536795 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:57.536801 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:57.536859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:57.573379 2088124 cri.go:89] found id: ""
	I1216 04:15:57.573403 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.573412 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:57.573418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:57.573488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:57.601415 2088124 cri.go:89] found id: ""
	I1216 04:15:57.601439 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.601447 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:57.601454 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:57.601514 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:57.625828 2088124 cri.go:89] found id: ""
	I1216 04:15:57.625852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.625860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:57.625866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:57.625932 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:57.651508 2088124 cri.go:89] found id: ""
	I1216 04:15:57.651534 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.651543 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:57.651549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:57.651609 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:57.678194 2088124 cri.go:89] found id: ""
	I1216 04:15:57.678228 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.678242 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:57.678252 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:57.678287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:57.733879 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:57.733916 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:57.750633 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:57.750661 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:57.828100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:57.813970    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.814594    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.815982    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.822718    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.823836    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:57.828131 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:57.828145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:57.855013 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:57.855070 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:00.384284 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:00.398189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:00.398285 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:00.442307 2088124 cri.go:89] found id: ""
	I1216 04:16:00.442337 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.442347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:00.442404 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:00.442487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:00.505962 2088124 cri.go:89] found id: ""
	I1216 04:16:00.505986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.505994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:00.506001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:00.506064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:00.548862 2088124 cri.go:89] found id: ""
	I1216 04:16:00.548940 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.548965 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:00.548984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:00.549098 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:00.576916 2088124 cri.go:89] found id: ""
	I1216 04:16:00.576939 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.576948 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:00.576954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:00.577013 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:00.602863 2088124 cri.go:89] found id: ""
	I1216 04:16:00.602891 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.602901 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:00.602907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:00.602971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:00.628659 2088124 cri.go:89] found id: ""
	I1216 04:16:00.628688 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.628698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:00.628705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:00.628771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:00.654429 2088124 cri.go:89] found id: ""
	I1216 04:16:00.654466 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.654475 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:00.654481 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:00.654556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:00.679835 2088124 cri.go:89] found id: ""
	I1216 04:16:00.679863 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.679877 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:00.679890 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:00.679901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:00.738456 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:00.738501 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:00.754802 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:00.754838 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:00.824660 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:00.815557    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.816379    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818197    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818825    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.820610    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:00.824683 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:00.824698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:00.850142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:00.850176 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:03.377190 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:03.388732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:03.388827 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:03.417059 2088124 cri.go:89] found id: ""
	I1216 04:16:03.417082 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.417090 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:03.417096 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:03.417157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:03.473568 2088124 cri.go:89] found id: ""
	I1216 04:16:03.473591 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.473599 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:03.473605 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:03.473676 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:03.510076 2088124 cri.go:89] found id: ""
	I1216 04:16:03.510097 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.510105 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:03.510111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:03.510170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:03.546041 2088124 cri.go:89] found id: ""
	I1216 04:16:03.546063 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.546072 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:03.546086 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:03.546148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:03.574587 2088124 cri.go:89] found id: ""
	I1216 04:16:03.574672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.574704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:03.574747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:03.574847 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:03.600940 2088124 cri.go:89] found id: ""
	I1216 04:16:03.600964 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.600973 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:03.600979 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:03.601041 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:03.626500 2088124 cri.go:89] found id: ""
	I1216 04:16:03.626524 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.626537 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:03.626544 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:03.626613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:03.651278 2088124 cri.go:89] found id: ""
	I1216 04:16:03.651345 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.651368 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:03.651386 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:03.651401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:03.713437 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:03.704982    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.705525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707260    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707865    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.709525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:03.713461 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:03.713476 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:03.739122 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:03.739183 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:03.769731 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:03.769761 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:03.825343 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:03.825379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:06.341217 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:06.351622 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:06.351695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:06.377192 2088124 cri.go:89] found id: ""
	I1216 04:16:06.377220 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.377229 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:06.377236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:06.377298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:06.407491 2088124 cri.go:89] found id: ""
	I1216 04:16:06.407516 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.407524 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:06.407530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:06.407587 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:06.432854 2088124 cri.go:89] found id: ""
	I1216 04:16:06.432881 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.432890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:06.432896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:06.432954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:06.508461 2088124 cri.go:89] found id: ""
	I1216 04:16:06.508483 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.508502 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:06.508510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:06.508572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:06.537008 2088124 cri.go:89] found id: ""
	I1216 04:16:06.537031 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.537039 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:06.537045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:06.537102 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:06.563652 2088124 cri.go:89] found id: ""
	I1216 04:16:06.563723 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.563740 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:06.563747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:06.563841 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:06.589523 2088124 cri.go:89] found id: ""
	I1216 04:16:06.589599 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.589623 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:06.589642 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:06.589725 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:06.615510 2088124 cri.go:89] found id: ""
	I1216 04:16:06.615577 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.615599 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:06.615623 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:06.615655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:06.670726 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:06.670760 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:06.689463 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:06.689495 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:06.755339 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:06.747246    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.748070    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.749790    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.750091    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.751596    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:06.755362 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:06.755375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:06.780884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:06.780917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
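The container-status command is a two-level shell fallback: the backticks expand to the full path of crictl when which finds it (or to the bare name crictl otherwise), and the || clause drops down to docker ps -a only if the crictl invocation fails. The same idiom in isolation, written with the more readable $() form (a sketch, not minikube code):

    #!/bin/bash
    # Prefer crictl; fall back to docker if crictl is absent or errors out.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a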
	I1216 04:16:09.313406 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:09.323603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:09.323673 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:09.351600 2088124 cri.go:89] found id: ""
	I1216 04:16:09.351624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.351632 2088124 logs.go:284] No container was found matching "kube-apiserver"
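	# `crictl ps -a --quiet --name=kube-apiserver` prints only container IDs (one
	# per line) for matching containers, including exited ones (-a); the empty
	# `found id: ""` above means containerd has no such container at all.
	# Pointing crictl at containerd explicitly (sketch):
	#   sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a --quiet --name=kube-apiserver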
	I1216 04:16:09.351639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:09.351699 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:09.375845 2088124 cri.go:89] found id: ""
	I1216 04:16:09.375869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.375878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:09.375885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:09.375950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:09.400733 2088124 cri.go:89] found id: ""
	I1216 04:16:09.400756 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.400764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:09.400770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:09.400830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:09.423762 2088124 cri.go:89] found id: ""
	I1216 04:16:09.423785 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.423793 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:09.423799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:09.423856 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:09.457898 2088124 cri.go:89] found id: ""
	I1216 04:16:09.457971 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.457993 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:09.458014 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:09.458132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:09.507416 2088124 cri.go:89] found id: ""
	I1216 04:16:09.507445 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.507453 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:09.507459 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:09.507518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:09.542968 2088124 cri.go:89] found id: ""
	I1216 04:16:09.543084 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.543115 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:09.543169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:09.543294 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:09.568289 2088124 cri.go:89] found id: ""
	I1216 04:16:09.568313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.568321 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
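	# The scan above walks every expected control-plane component and finds no
	# containers for any of them, i.e. the node never got past kubelet startup.
	# A compact way to reproduce the whole sweep (sketch):
	#   for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	#     printf '%-24s %s container(s)\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	#   done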
	I1216 04:16:09.568331 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:09.568343 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:09.630690 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:09.621816    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.622488    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624212    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624769    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.626372    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:09.630716 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:09.630732 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:09.656388 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:09.656424 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:09.684126 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:09.684152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
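	# The kubelet journal is the most useful signal here, since kubelet is the
	# unit that should be starting the static control-plane pods. A hedged
	# manual equivalent, including unit state:
	#   sudo systemctl status kubelet --no-pager -l; sudo journalctl -u kubelet -n 400 --no-pager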
	I1216 04:16:09.742624 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:09.742662 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
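	# From the timestamps, this whole probe (pgrep, per-component crictl scan,
	# log gathering) repeats on a roughly 3-second poll while minikube waits for
	# the apiserver to appear. A minimal wait loop of the same shape (sketch):
	#   until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 3; done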
	I1216 04:16:12.259263 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:12.269891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:12.269959 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:12.294506 2088124 cri.go:89] found id: ""
	I1216 04:16:12.294532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.294541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:12.294546 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:12.294628 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:12.318895 2088124 cri.go:89] found id: ""
	I1216 04:16:12.318924 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.318932 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:12.318938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:12.318994 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:12.344134 2088124 cri.go:89] found id: ""
	I1216 04:16:12.344158 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.344167 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:12.344173 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:12.344234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:12.368552 2088124 cri.go:89] found id: ""
	I1216 04:16:12.368574 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.368583 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:12.368590 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:12.368654 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:12.396826 2088124 cri.go:89] found id: ""
	I1216 04:16:12.396854 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.396863 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:12.396870 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:12.396931 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:12.422048 2088124 cri.go:89] found id: ""
	I1216 04:16:12.422076 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.422085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:12.422092 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:12.422153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:12.485647 2088124 cri.go:89] found id: ""
	I1216 04:16:12.485669 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.485677 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:12.485684 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:12.485750 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:12.529516 2088124 cri.go:89] found id: ""
	I1216 04:16:12.529539 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.529547 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:12.529557 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:12.529569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:12.545674 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:12.545705 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:12.608192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:12.599988    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.600547    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602099    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602578    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.604093    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:12.608257 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:12.608279 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:12.633428 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:12.633463 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:12.661070 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:12.661097 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:15.217877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:15.228678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:15.228748 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:15.253119 2088124 cri.go:89] found id: ""
	I1216 04:16:15.253143 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.253152 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:15.253158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:15.253220 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:15.285145 2088124 cri.go:89] found id: ""
	I1216 04:16:15.285168 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.285177 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:15.285183 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:15.285243 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:15.311311 2088124 cri.go:89] found id: ""
	I1216 04:16:15.311339 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.311348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:15.311355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:15.311416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:15.336241 2088124 cri.go:89] found id: ""
	I1216 04:16:15.336271 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.336286 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:15.336293 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:15.336354 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:15.362230 2088124 cri.go:89] found id: ""
	I1216 04:16:15.362258 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.362268 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:15.362275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:15.362334 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:15.387340 2088124 cri.go:89] found id: ""
	I1216 04:16:15.387362 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.387371 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:15.387377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:15.387437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:15.412173 2088124 cri.go:89] found id: ""
	I1216 04:16:15.412201 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.412210 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:15.412217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:15.412281 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:15.454276 2088124 cri.go:89] found id: ""
	I1216 04:16:15.454354 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.454378 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:15.454404 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:15.454446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:15.556767 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:15.556806 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:15.573628 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:15.573670 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:15.638801 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:15.629487    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.630048    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.631845    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.632428    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.634191    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:15.638865 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:15.638886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:15.663907 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:15.663944 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:18.197135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:18.208099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:18.208177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:18.234350 2088124 cri.go:89] found id: ""
	I1216 04:16:18.234379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.234388 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:18.234394 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:18.234459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:18.258985 2088124 cri.go:89] found id: ""
	I1216 04:16:18.259013 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.259022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:18.259028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:18.259110 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:18.284132 2088124 cri.go:89] found id: ""
	I1216 04:16:18.284156 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.284164 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:18.284171 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:18.284230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:18.309961 2088124 cri.go:89] found id: ""
	I1216 04:16:18.309989 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.309997 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:18.310004 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:18.310108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:18.336186 2088124 cri.go:89] found id: ""
	I1216 04:16:18.336212 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.336221 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:18.336228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:18.336289 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:18.361829 2088124 cri.go:89] found id: ""
	I1216 04:16:18.361858 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.361867 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:18.361874 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:18.361934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:18.388363 2088124 cri.go:89] found id: ""
	I1216 04:16:18.388385 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.388394 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:18.388400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:18.388463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:18.416963 2088124 cri.go:89] found id: ""
	I1216 04:16:18.416988 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.416996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:18.417006 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:18.417018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:18.500995 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:18.503604 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:18.521452 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:18.521531 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:18.589729 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:18.580618    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.581508    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583296    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583964    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.585797    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:18.589761 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:18.589775 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:18.616012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:18.616047 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.144794 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:21.155656 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:21.155729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:21.184379 2088124 cri.go:89] found id: ""
	I1216 04:16:21.184403 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.184411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:21.184417 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:21.184484 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:21.210137 2088124 cri.go:89] found id: ""
	I1216 04:16:21.210163 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.210172 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:21.210178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:21.210240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:21.235283 2088124 cri.go:89] found id: ""
	I1216 04:16:21.235307 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.235315 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:21.235321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:21.235381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:21.263715 2088124 cri.go:89] found id: ""
	I1216 04:16:21.263738 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.263746 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:21.263753 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:21.263823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:21.287600 2088124 cri.go:89] found id: ""
	I1216 04:16:21.287624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.287632 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:21.287638 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:21.287698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:21.315897 2088124 cri.go:89] found id: ""
	I1216 04:16:21.315919 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.315927 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:21.315934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:21.315993 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:21.339842 2088124 cri.go:89] found id: ""
	I1216 04:16:21.339866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.339874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:21.339880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:21.339939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:21.364501 2088124 cri.go:89] found id: ""
	I1216 04:16:21.364526 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.364535 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:21.364544 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:21.364556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:21.379974 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:21.380060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:21.474639 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:21.442912    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.443871    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.447436    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.448028    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.468365    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:21.474664 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:21.474676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:21.531857 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:21.531938 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.561122 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:21.561149 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:24.116616 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:24.126986 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:24.127075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:24.154481 2088124 cri.go:89] found id: ""
	I1216 04:16:24.154507 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.154526 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:24.154533 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:24.154591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:24.180064 2088124 cri.go:89] found id: ""
	I1216 04:16:24.180087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.180095 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:24.180103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:24.180165 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:24.205398 2088124 cri.go:89] found id: ""
	I1216 04:16:24.205424 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.205433 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:24.205440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:24.205499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:24.230340 2088124 cri.go:89] found id: ""
	I1216 04:16:24.230369 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.230377 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:24.230384 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:24.230445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:24.255009 2088124 cri.go:89] found id: ""
	I1216 04:16:24.255056 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.255066 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:24.255072 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:24.255131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:24.280187 2088124 cri.go:89] found id: ""
	I1216 04:16:24.280214 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.280224 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:24.280230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:24.280287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:24.304688 2088124 cri.go:89] found id: ""
	I1216 04:16:24.304711 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.304720 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:24.304726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:24.304788 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:24.329482 2088124 cri.go:89] found id: ""
	I1216 04:16:24.329505 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.329514 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:24.329523 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:24.329535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:24.345077 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:24.345106 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:24.410594 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:24.402842    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.403261    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.404850    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.405187    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.406756    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:24.410665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:24.410695 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:24.437142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:24.437180 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:24.512425 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:24.512454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:27.075945 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:27.086676 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:27.086751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:27.111375 2088124 cri.go:89] found id: ""
	I1216 04:16:27.111402 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.111411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:27.111418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:27.111479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:27.136068 2088124 cri.go:89] found id: ""
	I1216 04:16:27.136100 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.136109 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:27.136115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:27.136174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:27.160473 2088124 cri.go:89] found id: ""
	I1216 04:16:27.160503 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.160513 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:27.160519 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:27.160580 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:27.186608 2088124 cri.go:89] found id: ""
	I1216 04:16:27.186632 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.186639 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:27.186646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:27.186708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:27.217149 2088124 cri.go:89] found id: ""
	I1216 04:16:27.217173 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.217182 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:27.217189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:27.217253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:27.243558 2088124 cri.go:89] found id: ""
	I1216 04:16:27.243583 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.243592 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:27.243598 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:27.243665 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:27.269387 2088124 cri.go:89] found id: ""
	I1216 04:16:27.269415 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.269425 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:27.269433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:27.269494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:27.296709 2088124 cri.go:89] found id: ""
	I1216 04:16:27.296778 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.296790 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:27.296800 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:27.296811 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:27.327331 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:27.327359 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:27.384171 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:27.384206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:27.400922 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:27.400958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:27.528794 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:27.519212    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.519883    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521482    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521988    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.523599    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:27.528819 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:27.528835 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:30.057685 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:30.079715 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:30.079801 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:30.110026 2088124 cri.go:89] found id: ""
	I1216 04:16:30.110054 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.110063 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:30.110076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:30.110143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:30.137960 2088124 cri.go:89] found id: ""
	I1216 04:16:30.137986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.137994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:30.138001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:30.138065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:30.165148 2088124 cri.go:89] found id: ""
	I1216 04:16:30.165177 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.165186 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:30.165194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:30.165283 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:30.192836 2088124 cri.go:89] found id: ""
	I1216 04:16:30.192866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.192875 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:30.192883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:30.192951 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:30.220187 2088124 cri.go:89] found id: ""
	I1216 04:16:30.220213 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.220227 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:30.220233 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:30.220333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:30.247843 2088124 cri.go:89] found id: ""
	I1216 04:16:30.247872 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.247882 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:30.247889 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:30.247980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:30.274429 2088124 cri.go:89] found id: ""
	I1216 04:16:30.274454 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.274463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:30.274470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:30.274583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:30.302775 2088124 cri.go:89] found id: ""
	I1216 04:16:30.302809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.302819 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:30.302844 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:30.302863 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:30.318968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:30.318999 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:30.383767 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:30.383790 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:30.383804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:30.410095 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:30.410131 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:30.468723 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:30.468804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
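[Editor's note] The sweep above is the pattern that repeats for the rest of this section: minikube probes for each expected control-plane container by name through crictl, and every probe comes back with an empty ID list. A minimal sketch of the same probe done by hand on the node (assuming shell access, e.g. via `minikube ssh -p <profile>`; the crictl invocation is copied verbatim from the Run: lines above):

	# For each expected component, ask crictl for matching container IDs.
	# An empty result corresponds to the logs.go:284 "No container was
	# found" warnings in the log.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container matching \"$name\""
	  else
	    echo "$name -> $ids"
	  fi
	done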
	I1216 04:16:33.056394 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:33.067079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:33.067155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:33.092150 2088124 cri.go:89] found id: ""
	I1216 04:16:33.092178 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.092188 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:33.092194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:33.092260 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:33.117824 2088124 cri.go:89] found id: ""
	I1216 04:16:33.117852 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.117861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:33.117868 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:33.117927 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:33.143646 2088124 cri.go:89] found id: ""
	I1216 04:16:33.143672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.143680 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:33.143686 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:33.143744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:33.169791 2088124 cri.go:89] found id: ""
	I1216 04:16:33.169818 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.169826 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:33.169833 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:33.169893 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:33.194288 2088124 cri.go:89] found id: ""
	I1216 04:16:33.194313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.194323 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:33.194329 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:33.194388 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:33.221028 2088124 cri.go:89] found id: ""
	I1216 04:16:33.221062 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.221071 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:33.221078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:33.221178 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:33.245742 2088124 cri.go:89] found id: ""
	I1216 04:16:33.245769 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.245778 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:33.245784 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:33.245852 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:33.270847 2088124 cri.go:89] found id: ""
	I1216 04:16:33.270870 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.270879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:33.270888 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:33.270899 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:33.327247 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:33.327283 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:33.342917 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:33.342947 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:33.407775 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:33.407796 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:33.407809 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:33.433956 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:33.433990 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
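[Editor's note] The "Gathering logs for ..." steps collect the same four sources on every sweep. The commands below are copied verbatim from the ssh_runner lines above and can be run by hand on the node to reproduce the identical bundle:

	sudo journalctl -u kubelet -n 400        # kubelet unit log, last 400 lines
	sudo journalctl -u containerd -n 400     # containerd unit log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status, docker fallback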
	I1216 04:16:36.019705 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:36.031406 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:36.031494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:36.061621 2088124 cri.go:89] found id: ""
	I1216 04:16:36.061647 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.061657 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:36.061664 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:36.061730 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:36.088137 2088124 cri.go:89] found id: ""
	I1216 04:16:36.088162 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.088171 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:36.088178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:36.088239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:36.113810 2088124 cri.go:89] found id: ""
	I1216 04:16:36.113833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.113842 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:36.113849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:36.113913 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:36.139840 2088124 cri.go:89] found id: ""
	I1216 04:16:36.139866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.139874 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:36.139883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:36.139965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:36.168529 2088124 cri.go:89] found id: ""
	I1216 04:16:36.168553 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.168561 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:36.168567 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:36.168627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:36.196976 2088124 cri.go:89] found id: ""
	I1216 04:16:36.197002 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.197027 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:36.197050 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:36.197133 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:36.221877 2088124 cri.go:89] found id: ""
	I1216 04:16:36.221903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.221912 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:36.221918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:36.222032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:36.248921 2088124 cri.go:89] found id: ""
	I1216 04:16:36.248947 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.248956 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:36.248966 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:36.248977 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:36.264593 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:36.264622 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:36.329217 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:36.329239 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:36.329252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:36.354482 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:36.354514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:36.382824 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:36.382890 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
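[Editor's note] The kubectl stderr repeated in every sweep reports that the dial to [::1]:8443 is refused, meaning nothing is listening on the apiserver port at all — consistent with crictl finding no kube-apiserver container. A quick hedged check on the node (assuming ss and curl are present in the node image; /livez is the standard kube-apiserver liveness endpoint, not something taken from this log):

	sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	curl -sk https://localhost:8443/livez || echo "apiserver not answering"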
	I1216 04:16:38.944004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:38.957491 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:38.957613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:38.982761 2088124 cri.go:89] found id: ""
	I1216 04:16:38.982787 2088124 logs.go:282] 0 containers: []
	W1216 04:16:38.982796 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:38.982803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:38.982861 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:39.010506 2088124 cri.go:89] found id: ""
	I1216 04:16:39.010532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.010542 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:39.010549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:39.010630 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:39.035827 2088124 cri.go:89] found id: ""
	I1216 04:16:39.035853 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.035862 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:39.035875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:39.035934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:39.060421 2088124 cri.go:89] found id: ""
	I1216 04:16:39.060448 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.060457 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:39.060463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:39.060550 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:39.087481 2088124 cri.go:89] found id: ""
	I1216 04:16:39.087504 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.087512 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:39.087518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:39.087577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:39.111994 2088124 cri.go:89] found id: ""
	I1216 04:16:39.112028 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.112037 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:39.112044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:39.112114 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:39.136060 2088124 cri.go:89] found id: ""
	I1216 04:16:39.136093 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.136101 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:39.136108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:39.136186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:39.166063 2088124 cri.go:89] found id: ""
	I1216 04:16:39.166090 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.166099 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:39.166109 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:39.166120 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:39.222912 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:39.222949 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:39.239064 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:39.239096 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:39.305289 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:39.305312 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:39.305326 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:39.330965 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:39.330997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:41.862236 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:41.873016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:41.873089 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:41.900650 2088124 cri.go:89] found id: ""
	I1216 04:16:41.900675 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.900684 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:41.900691 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:41.900754 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:41.924986 2088124 cri.go:89] found id: ""
	I1216 04:16:41.925012 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.925022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:41.925028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:41.925090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:41.950157 2088124 cri.go:89] found id: ""
	I1216 04:16:41.950182 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.950191 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:41.950197 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:41.950257 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:41.975738 2088124 cri.go:89] found id: ""
	I1216 04:16:41.975763 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.975772 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:41.975778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:41.975837 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:42.008172 2088124 cri.go:89] found id: ""
	I1216 04:16:42.008203 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.008214 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:42.008221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:42.008295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:42.036816 2088124 cri.go:89] found id: ""
	I1216 04:16:42.036841 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.036851 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:42.036858 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:42.036969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:42.066668 2088124 cri.go:89] found id: ""
	I1216 04:16:42.066697 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.066706 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:42.066713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:42.066787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:42.098167 2088124 cri.go:89] found id: ""
	I1216 04:16:42.098200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.098217 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:42.098231 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:42.098245 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:42.184589 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:42.184617 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:42.184635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:42.214306 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:42.214348 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:42.253172 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:42.253203 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:42.312705 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:42.312757 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
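[Editor's note] Each sweep opens with the same pgrep probe; pgrep exits non-zero when no process matches, and that failure is what keeps the retry loop going. A sketch of how the probe reads (command verbatim from the log, only quoted here for interactive use):

	if sudo pgrep -xnf 'kube-apiserver.*minikube.*'; then
	  echo "kube-apiserver process found"
	else
	  echo "no kube-apiserver process yet; the sweep repeats"
	fi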
	I1216 04:16:44.831426 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:44.842214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:44.842287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:44.872804 2088124 cri.go:89] found id: ""
	I1216 04:16:44.872833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.872843 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:44.872851 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:44.872915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:44.903988 2088124 cri.go:89] found id: ""
	I1216 04:16:44.904064 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.904089 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:44.904108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:44.904200 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:44.930758 2088124 cri.go:89] found id: ""
	I1216 04:16:44.930837 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.930861 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:44.930880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:44.930971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:44.955785 2088124 cri.go:89] found id: ""
	I1216 04:16:44.955809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.955817 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:44.955823 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:44.955883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:44.983685 2088124 cri.go:89] found id: ""
	I1216 04:16:44.983762 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.983785 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:44.983800 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:44.983876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:45.034599 2088124 cri.go:89] found id: ""
	I1216 04:16:45.034623 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.034631 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:45.034639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:45.034713 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:45.106900 2088124 cri.go:89] found id: ""
	I1216 04:16:45.106927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.106937 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:45.106945 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:45.107019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:45.148790 2088124 cri.go:89] found id: ""
	I1216 04:16:45.148816 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.148826 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:45.148837 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:45.148851 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:45.242114 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:45.242166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:45.275372 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:45.275416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:45.355175 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:45.355241 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:45.355263 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:45.382211 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:45.382248 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
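[Editor's note] The "describe nodes" step always runs the version-pinned kubectl against the node-local kubeconfig and exits with status 1 while the apiserver is down. Reproducing it by hand uses exactly the paths shown in the log:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	echo "exit: $?"   # 1 here, matching the logs.go:130 failures above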
	I1216 04:16:47.915609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:47.927521 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:47.927603 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:47.957165 2088124 cri.go:89] found id: ""
	I1216 04:16:47.957192 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.957205 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:47.957212 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:47.957278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:47.983356 2088124 cri.go:89] found id: ""
	I1216 04:16:47.983379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.983396 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:47.983408 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:47.983475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:48.012782 2088124 cri.go:89] found id: ""
	I1216 04:16:48.012807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.012815 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:48.012822 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:48.012887 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:48.042072 2088124 cri.go:89] found id: ""
	I1216 04:16:48.042096 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.042105 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:48.042111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:48.042172 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:48.066925 2088124 cri.go:89] found id: ""
	I1216 04:16:48.066954 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.066963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:48.066970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:48.067032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:48.097340 2088124 cri.go:89] found id: ""
	I1216 04:16:48.097366 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.097378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:48.097385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:48.097470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:48.126364 2088124 cri.go:89] found id: ""
	I1216 04:16:48.126397 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.126407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:48.126413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:48.126510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:48.152175 2088124 cri.go:89] found id: ""
	I1216 04:16:48.152199 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.152207 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:48.152217 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:48.152232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:48.216814 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:48.216861 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:48.235153 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:48.235187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:48.303336 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:48.303404 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:48.303433 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:48.332107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:48.332175 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:50.863912 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:50.876115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:50.876205 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:50.902170 2088124 cri.go:89] found id: ""
	I1216 04:16:50.902200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.902209 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:50.902216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:50.902273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:50.925870 2088124 cri.go:89] found id: ""
	I1216 04:16:50.925903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.925912 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:50.925918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:50.925986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:50.950257 2088124 cri.go:89] found id: ""
	I1216 04:16:50.950283 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.950293 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:50.950299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:50.950358 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:50.975507 2088124 cri.go:89] found id: ""
	I1216 04:16:50.975531 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.975541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:50.975547 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:50.975607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:50.999494 2088124 cri.go:89] found id: ""
	I1216 04:16:50.999520 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.999529 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:50.999535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:50.999599 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:51.026658 2088124 cri.go:89] found id: ""
	I1216 04:16:51.026685 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.026694 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:51.026701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:51.026760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:51.051749 2088124 cri.go:89] found id: ""
	I1216 04:16:51.051775 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.051784 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:51.051790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:51.051868 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:51.076898 2088124 cri.go:89] found id: ""
	I1216 04:16:51.076927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.076938 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:51.076948 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:51.076960 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:51.103255 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:51.103293 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:51.134833 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:51.134859 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:51.193704 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:51.193741 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:51.212900 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:51.212928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:51.297351 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:51.288083   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.288656   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.289678   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291273   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291873   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:51.288083   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.288656   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.289678   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291273   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291873   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
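[Editor's note] The timestamps show one full sweep roughly every three seconds (04:16:30, :33, :36, :39, :42, :45, :48, :51, :54). A hypothetical equivalent of that wait loop, for illustration only:

	# Illustrative only: the 300 s budget is an assumption, not the
	# timeout minikube itself uses.
	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out" >&2; break; }
	  sleep 3
	done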
	I1216 04:16:53.797612 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:53.808331 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:53.808407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:53.832739 2088124 cri.go:89] found id: ""
	I1216 04:16:53.832807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.832829 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:53.832850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:53.832945 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:53.857832 2088124 cri.go:89] found id: ""
	I1216 04:16:53.857869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.857878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:53.857885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:53.857954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:53.885064 2088124 cri.go:89] found id: ""
	I1216 04:16:53.885087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.885095 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:53.885101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:53.885158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:53.913372 2088124 cri.go:89] found id: ""
	I1216 04:16:53.913451 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.913475 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:53.913493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:53.913586 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:53.940577 2088124 cri.go:89] found id: ""
	I1216 04:16:53.940646 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.940673 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:53.940687 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:53.940764 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:53.966496 2088124 cri.go:89] found id: ""
	I1216 04:16:53.966534 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.966543 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:53.966552 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:53.966623 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:53.992796 2088124 cri.go:89] found id: ""
	I1216 04:16:53.992820 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.992828 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:53.992834 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:53.992896 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:54.019752 2088124 cri.go:89] found id: ""
	I1216 04:16:54.019840 2088124 logs.go:282] 0 containers: []
	W1216 04:16:54.019857 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:54.019868 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:54.019880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:54.079349 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:54.079394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:54.098509 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:54.098593 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:54.166447 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:54.157535   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.158625   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160232   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160884   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.162506   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:54.157535   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.158625   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160232   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160884   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.162506   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:54.166510 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:54.166549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:54.191683 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:54.191718 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:56.719163 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:56.748538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:56.748613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:56.786218 2088124 cri.go:89] found id: ""
	I1216 04:16:56.786244 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.786253 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:56.786259 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:56.786320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:56.812994 2088124 cri.go:89] found id: ""
	I1216 04:16:56.813016 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.813024 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:56.813031 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:56.813090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:56.841729 2088124 cri.go:89] found id: ""
	I1216 04:16:56.841751 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.841760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:56.841766 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:56.841825 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:56.870356 2088124 cri.go:89] found id: ""
	I1216 04:16:56.870379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.870387 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:56.870393 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:56.870451 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:56.899841 2088124 cri.go:89] found id: ""
	I1216 04:16:56.899867 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.899877 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:56.899883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:56.899943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:56.924316 2088124 cri.go:89] found id: ""
	I1216 04:16:56.924343 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.924352 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:56.924359 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:56.924417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:56.948789 2088124 cri.go:89] found id: ""
	I1216 04:16:56.948815 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.948824 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:56.948830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:56.948891 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:56.977394 2088124 cri.go:89] found id: ""
	I1216 04:16:56.977423 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.977432 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:56.977441 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:56.977453 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:57.032732 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:57.032770 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:57.048273 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:57.048302 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:57.115644 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:57.106949   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.107590   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109199   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109751   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.111454   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:57.106949   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.107590   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109199   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109751   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.111454   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:57.115665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:57.115685 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:57.140936 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:57.140971 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:59.669285 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:59.682343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:59.682415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:59.722722 2088124 cri.go:89] found id: ""
	I1216 04:16:59.722750 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.722758 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:59.722764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:59.722824 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:59.778634 2088124 cri.go:89] found id: ""
	I1216 04:16:59.778659 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.778667 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:59.778674 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:59.778733 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:59.817378 2088124 cri.go:89] found id: ""
	I1216 04:16:59.817470 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.817498 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:59.817538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:59.817644 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:59.848330 2088124 cri.go:89] found id: ""
	I1216 04:16:59.848356 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.848365 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:59.848372 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:59.848459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:59.880033 2088124 cri.go:89] found id: ""
	I1216 04:16:59.880061 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.880074 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:59.880080 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:59.880154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:59.909206 2088124 cri.go:89] found id: ""
	I1216 04:16:59.909231 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.909241 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:59.909248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:59.909351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:59.934604 2088124 cri.go:89] found id: ""
	I1216 04:16:59.934630 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.934639 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:59.934646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:59.934708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:59.959916 2088124 cri.go:89] found id: ""
	I1216 04:16:59.959994 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.960011 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:59.960022 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:59.960035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:00.015911 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:00.016018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:00.105766 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:00.105818 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:00.319730 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:00.290488   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.291031   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.293087   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.295468   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.296159   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:00.290488   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.291031   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.293087   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.295468   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.296159   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:00.319780 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:00.319793 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:00.371509 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:00.371569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:02.957388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:02.969075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:02.969174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:02.996244 2088124 cri.go:89] found id: ""
	I1216 04:17:02.996268 2088124 logs.go:282] 0 containers: []
	W1216 04:17:02.996276 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:02.996283 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:02.996351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:03.035674 2088124 cri.go:89] found id: ""
	I1216 04:17:03.035699 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.035709 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:03.035716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:03.035786 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:03.063231 2088124 cri.go:89] found id: ""
	I1216 04:17:03.063262 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.063271 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:03.063278 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:03.063348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:03.090248 2088124 cri.go:89] found id: ""
	I1216 04:17:03.090277 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.090285 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:03.090292 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:03.090357 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:03.118599 2088124 cri.go:89] found id: ""
	I1216 04:17:03.118628 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.118637 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:03.118643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:03.118705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:03.145364 2088124 cri.go:89] found id: ""
	I1216 04:17:03.145394 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.145403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:03.145411 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:03.145476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:03.174022 2088124 cri.go:89] found id: ""
	I1216 04:17:03.174047 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.174057 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:03.174064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:03.174132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:03.201495 2088124 cri.go:89] found id: ""
	I1216 04:17:03.201518 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.201527 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:03.201537 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:03.201549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:03.259166 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:03.259202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:03.276281 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:03.276319 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:03.347465 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:03.338991   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.339696   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341341   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341888   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.343162   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:03.338991   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.339696   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341341   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341888   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.343162   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:03.347486 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:03.347499 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:03.374421 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:03.374460 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:05.905789 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:05.917930 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:05.918028 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:05.944068 2088124 cri.go:89] found id: ""
	I1216 04:17:05.944092 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.944100 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:05.944106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:05.944170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:05.971887 2088124 cri.go:89] found id: ""
	I1216 04:17:05.971915 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.971924 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:05.971931 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:05.971998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:05.999415 2088124 cri.go:89] found id: ""
	I1216 04:17:05.999452 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.999467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:05.999474 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:05.999547 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:06.038021 2088124 cri.go:89] found id: ""
	I1216 04:17:06.038109 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.038128 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:06.038138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:06.038231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:06.069582 2088124 cri.go:89] found id: ""
	I1216 04:17:06.069610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.069620 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:06.069626 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:06.069702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:06.102728 2088124 cri.go:89] found id: ""
	I1216 04:17:06.102753 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.102763 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:06.102770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:06.102846 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:06.131178 2088124 cri.go:89] found id: ""
	I1216 04:17:06.131372 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.131401 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:06.131420 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:06.131527 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:06.158881 2088124 cri.go:89] found id: ""
	I1216 04:17:06.158966 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.158996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:06.159061 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:06.159098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:06.185524 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:06.185554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:06.221206 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:06.221235 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:06.280309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:06.280357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:06.297032 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:06.297065 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:06.363186 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:06.354862   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.355590   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357189   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357554   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.359080   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:06.354862   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.355590   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357189   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357554   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.359080   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
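
Every describe-nodes attempt in this window fails the same way: nothing is serving on localhost:8443, and crictl reports zero control-plane containers, so the apiserver static pod never started. The natural next question is whether kubelet is even attempting the static pods. A short triage sketch; these follow-up commands are suggestions for someone debugging this by hand, not part of the captured run, and the manifest path assumes the standard kubeadm layout minikube uses:

	# Confirm nothing is listening on the apiserver port.
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"

	# Static pod manifests that kubelet should be running.
	ls /etc/kubernetes/manifests/

	# Kubelet's view of the apiserver static pod, if any.
	sudo journalctl -u kubelet --no-pager -n 200 | grep -i apiserver
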
	I1216 04:17:08.864854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:08.875530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:08.875607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:08.900341 2088124 cri.go:89] found id: ""
	I1216 04:17:08.900376 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.900386 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:08.900392 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:08.900453 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:08.924614 2088124 cri.go:89] found id: ""
	I1216 04:17:08.924638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.924647 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:08.924653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:08.924715 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:08.949702 2088124 cri.go:89] found id: ""
	I1216 04:17:08.949729 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.949738 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:08.949744 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:08.949803 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:08.973818 2088124 cri.go:89] found id: ""
	I1216 04:17:08.973848 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.973858 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:08.973864 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:08.973923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:08.999010 2088124 cri.go:89] found id: ""
	I1216 04:17:08.999033 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.999079 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:08.999087 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:08.999149 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:09.030095 2088124 cri.go:89] found id: ""
	I1216 04:17:09.030122 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.030131 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:09.030138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:09.030198 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:09.054300 2088124 cri.go:89] found id: ""
	I1216 04:17:09.054324 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.054332 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:09.054339 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:09.054397 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:09.078301 2088124 cri.go:89] found id: ""
	I1216 04:17:09.078328 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.078337 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:09.078346 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:09.078358 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:09.106185 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:09.106220 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:09.161474 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:09.161513 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:09.177365 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:09.177394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:09.242353 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:09.233841   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.234299   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236131   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236653   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.238465   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:09.233841   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.234299   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236131   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236653   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.238465   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:09.242378 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:09.242392 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:11.767582 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:11.779587 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:11.779667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:11.806280 2088124 cri.go:89] found id: ""
	I1216 04:17:11.806308 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.806317 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:11.806323 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:11.806386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:11.831161 2088124 cri.go:89] found id: ""
	I1216 04:17:11.831187 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.831196 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:11.831203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:11.831262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:11.859758 2088124 cri.go:89] found id: ""
	I1216 04:17:11.859781 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.859790 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:11.859796 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:11.859853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:11.884445 2088124 cri.go:89] found id: ""
	I1216 04:17:11.884473 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.884483 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:11.884489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:11.884567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:11.909783 2088124 cri.go:89] found id: ""
	I1216 04:17:11.909860 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.909886 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:11.909904 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:11.909989 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:11.934802 2088124 cri.go:89] found id: ""
	I1216 04:17:11.934833 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.934842 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:11.934848 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:11.934909 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:11.961240 2088124 cri.go:89] found id: ""
	I1216 04:17:11.961318 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.961344 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:11.961358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:11.961431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:11.985352 2088124 cri.go:89] found id: ""
	I1216 04:17:11.985380 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.985389 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:11.985404 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:11.985416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:12.050891 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:12.042955   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.043613   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045154   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045461   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.046975   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:12.042955   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.043613   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045154   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045461   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.046975   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:12.050912 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:12.050925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:12.076153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:12.076186 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:12.108364 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:12.108393 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:12.164122 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:12.164161 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:14.681316 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:14.698056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:14.698131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:14.764358 2088124 cri.go:89] found id: ""
	I1216 04:17:14.764382 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.764391 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:14.764397 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:14.764468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:14.792079 2088124 cri.go:89] found id: ""
	I1216 04:17:14.792110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.792120 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:14.792130 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:14.792197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:14.817831 2088124 cri.go:89] found id: ""
	I1216 04:17:14.817857 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.817867 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:14.817875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:14.817935 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:14.846609 2088124 cri.go:89] found id: ""
	I1216 04:17:14.846638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.846646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:14.846653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:14.846712 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:14.871213 2088124 cri.go:89] found id: ""
	I1216 04:17:14.871237 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.871246 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:14.871255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:14.871313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:14.896165 2088124 cri.go:89] found id: ""
	I1216 04:17:14.896192 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.896201 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:14.896208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:14.896269 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:14.922595 2088124 cri.go:89] found id: ""
	I1216 04:17:14.922621 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.922629 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:14.922635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:14.922698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:14.949236 2088124 cri.go:89] found id: ""
	I1216 04:17:14.949303 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.949327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:14.949344 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:14.949356 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:15.027151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:15.009633   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.011301   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.012737   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.013346   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.016022   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:15.009633   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.011301   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.012737   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.013346   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.016022   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:15.027238 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:15.027269 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:15.060605 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:15.060646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:15.093643 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:15.093728 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:15.150597 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:15.150635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:17.668643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:17.679947 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:17.680020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:17.722386 2088124 cri.go:89] found id: ""
	I1216 04:17:17.722409 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.722417 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:17.722423 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:17.722487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:17.775941 2088124 cri.go:89] found id: ""
	I1216 04:17:17.775964 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.775974 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:17.775980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:17.776040 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:17.802436 2088124 cri.go:89] found id: ""
	I1216 04:17:17.802458 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.802467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:17.802473 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:17.802532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:17.828371 2088124 cri.go:89] found id: ""
	I1216 04:17:17.828399 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.828409 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:17.828415 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:17.828479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:17.853344 2088124 cri.go:89] found id: ""
	I1216 04:17:17.853370 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.853379 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:17.853386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:17.853479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:17.881429 2088124 cri.go:89] found id: ""
	I1216 04:17:17.881456 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.881465 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:17.881471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:17.881533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:17.904862 2088124 cri.go:89] found id: ""
	I1216 04:17:17.904938 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.904961 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:17.904975 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:17.905050 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:17.929897 2088124 cri.go:89] found id: ""
	I1216 04:17:17.929977 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.930001 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:17.930028 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:17.930064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:17.998744 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:17.990296   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.990760   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.992790   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.993233   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.994439   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:17.990296   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.990760   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.992790   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.993233   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.994439   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:17.998813 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:17.998840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:18.026132 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:18.026171 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:18.058645 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:18.058676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:18.115432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:18.115467 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:20.631899 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:20.643452 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:20.643535 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:20.668165 2088124 cri.go:89] found id: ""
	I1216 04:17:20.668190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.668199 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:20.668205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:20.668263 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:20.724732 2088124 cri.go:89] found id: ""
	I1216 04:17:20.724759 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.724768 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:20.724774 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:20.724845 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:20.771015 2088124 cri.go:89] found id: ""
	I1216 04:17:20.771058 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.771068 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:20.771075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:20.771155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:20.805632 2088124 cri.go:89] found id: ""
	I1216 04:17:20.805662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.805672 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:20.805679 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:20.805747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:20.835160 2088124 cri.go:89] found id: ""
	I1216 04:17:20.835226 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.835242 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:20.835249 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:20.835308 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:20.861499 2088124 cri.go:89] found id: ""
	I1216 04:17:20.861522 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.861531 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:20.861538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:20.861595 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:20.885895 2088124 cri.go:89] found id: ""
	I1216 04:17:20.885919 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.885928 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:20.885934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:20.885998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:20.910445 2088124 cri.go:89] found id: ""
	I1216 04:17:20.910468 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.910477 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:20.910486 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:20.910498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:20.966176 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:20.966211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:20.983062 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:20.983092 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:21.049819 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:21.041149   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.041819   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.043484   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.044157   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.045775   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:21.041149   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.041819   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.043484   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.044157   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.045775   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:21.049842 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:21.049856 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:21.075330 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:21.075370 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:23.603121 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:23.613760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:23.613834 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:23.642856 2088124 cri.go:89] found id: ""
	I1216 04:17:23.642882 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.642890 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:23.642897 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:23.642957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:23.671150 2088124 cri.go:89] found id: ""
	I1216 04:17:23.671175 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.671183 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:23.671189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:23.671247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:23.733230 2088124 cri.go:89] found id: ""
	I1216 04:17:23.733256 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.733265 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:23.733271 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:23.733330 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:23.782653 2088124 cri.go:89] found id: ""
	I1216 04:17:23.782679 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.782688 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:23.782694 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:23.782759 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:23.810224 2088124 cri.go:89] found id: ""
	I1216 04:17:23.810249 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.810259 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:23.810266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:23.810327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:23.835579 2088124 cri.go:89] found id: ""
	I1216 04:17:23.835604 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.835613 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:23.835620 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:23.835680 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:23.864585 2088124 cri.go:89] found id: ""
	I1216 04:17:23.864610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.864618 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:23.864625 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:23.864683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:23.892217 2088124 cri.go:89] found id: ""
	I1216 04:17:23.892294 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.892311 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:23.892322 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:23.892334 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:23.955889 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:23.947392   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.947993   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949516   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949846   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.951412   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:23.947392   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.947993   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949516   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949846   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.951412   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:23.955910 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:23.955929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:23.983017 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:23.983064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:24.018919 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:24.018946 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:24.076537 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:24.076578 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:26.592968 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:26.603896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:26.603971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:26.628560 2088124 cri.go:89] found id: ""
	I1216 04:17:26.628583 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.628591 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:26.628597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:26.628663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:26.655525 2088124 cri.go:89] found id: ""
	I1216 04:17:26.655549 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.655558 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:26.655564 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:26.655627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:26.681142 2088124 cri.go:89] found id: ""
	I1216 04:17:26.681169 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.681178 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:26.681185 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:26.681245 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:26.726046 2088124 cri.go:89] found id: ""
	I1216 04:17:26.726069 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.726078 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:26.726084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:26.726145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:26.761483 2088124 cri.go:89] found id: ""
	I1216 04:17:26.761558 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.761570 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:26.761578 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:26.761670 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:26.804988 2088124 cri.go:89] found id: ""
	I1216 04:17:26.805062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.805085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:26.805104 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:26.805191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:26.835017 2088124 cri.go:89] found id: ""
	I1216 04:17:26.835107 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.835132 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:26.835146 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:26.835222 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:26.864963 2088124 cri.go:89] found id: ""
	I1216 04:17:26.864989 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.864998 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:26.865008 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:26.865020 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:26.920931 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:26.920966 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:26.936801 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:26.936828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:27.001379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:26.993556   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.993942   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.995579   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.996105   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.997606   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:26.993556   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.993942   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.995579   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.996105   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.997606   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:27.001453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:27.001473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:27.029301 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:27.029338 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.560341 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:29.570732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:29.570810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:29.594792 2088124 cri.go:89] found id: ""
	I1216 04:17:29.594819 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.594828 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:29.594835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:29.594900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:29.619488 2088124 cri.go:89] found id: ""
	I1216 04:17:29.619514 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.619523 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:29.619530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:29.619589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:29.644688 2088124 cri.go:89] found id: ""
	I1216 04:17:29.644711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.644720 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:29.644726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:29.644792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:29.670117 2088124 cri.go:89] found id: ""
	I1216 04:17:29.670143 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.670152 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:29.670158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:29.670246 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:29.744231 2088124 cri.go:89] found id: ""
	I1216 04:17:29.744258 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.744267 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:29.744273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:29.744333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:29.784178 2088124 cri.go:89] found id: ""
	I1216 04:17:29.784201 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.784211 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:29.784217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:29.784278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:29.813318 2088124 cri.go:89] found id: ""
	I1216 04:17:29.813341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.813349 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:29.813355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:29.813414 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:29.841947 2088124 cri.go:89] found id: ""
	I1216 04:17:29.841973 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.841981 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:29.841991 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:29.842003 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.872423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:29.872449 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:29.927890 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:29.927927 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:29.943872 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:29.943903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:30.030211 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:30.002270   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.003334   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.011701   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.013525   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.017907   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:30.002270   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.003334   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.011701   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.013525   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.017907   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:30.030233 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:30.030247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:32.571327 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:32.582193 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:32.582264 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:32.614548 2088124 cri.go:89] found id: ""
	I1216 04:17:32.614575 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.614584 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:32.614591 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:32.614656 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:32.639581 2088124 cri.go:89] found id: ""
	I1216 04:17:32.639609 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.639618 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:32.639624 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:32.639690 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:32.664409 2088124 cri.go:89] found id: ""
	I1216 04:17:32.664431 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.664440 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:32.664446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:32.664540 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:32.702042 2088124 cri.go:89] found id: ""
	I1216 04:17:32.702068 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.702077 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:32.702083 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:32.702143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:32.744945 2088124 cri.go:89] found id: ""
	I1216 04:17:32.744972 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.744981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:32.744988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:32.745073 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:32.789635 2088124 cri.go:89] found id: ""
	I1216 04:17:32.789662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.789671 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:32.789678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:32.789739 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:32.815679 2088124 cri.go:89] found id: ""
	I1216 04:17:32.815707 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.815717 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:32.815724 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:32.815787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:32.841170 2088124 cri.go:89] found id: ""
	I1216 04:17:32.841195 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.841204 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:32.841213 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:32.841224 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:32.897709 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:32.897747 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:32.913830 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:32.913862 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:32.978618 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:32.969623   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.970476   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972369   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972985   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.974627   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:32.969623   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.970476   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972369   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972985   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.974627   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:32.978642 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:32.978655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:33.004220 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:33.004272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:35.534506 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:35.545218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:35.545290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:35.570921 2088124 cri.go:89] found id: ""
	I1216 04:17:35.570949 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.570958 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:35.570965 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:35.571023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:35.596188 2088124 cri.go:89] found id: ""
	I1216 04:17:35.596216 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.596226 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:35.596232 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:35.596290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:35.621275 2088124 cri.go:89] found id: ""
	I1216 04:17:35.621298 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.621307 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:35.621313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:35.621373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:35.646280 2088124 cri.go:89] found id: ""
	I1216 04:17:35.646304 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.646312 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:35.646319 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:35.646380 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:35.674777 2088124 cri.go:89] found id: ""
	I1216 04:17:35.674850 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.674874 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:35.674894 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:35.674969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:35.734693 2088124 cri.go:89] found id: ""
	I1216 04:17:35.734716 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.734725 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:35.734732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:35.734792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:35.776099 2088124 cri.go:89] found id: ""
	I1216 04:17:35.776121 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.776129 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:35.776136 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:35.776195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:35.809643 2088124 cri.go:89] found id: ""
	I1216 04:17:35.809720 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.809744 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:35.809765 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:35.809805 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:35.865415 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:35.865452 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:35.880891 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:35.880969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:35.943467 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:35.935628   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.936425   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938104   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938398   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.939836   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:35.935628   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.936425   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938104   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938398   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.939836   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:35.943485 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:35.943497 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:35.968153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:35.968187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:38.502135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:38.512843 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:38.512915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:38.537512 2088124 cri.go:89] found id: ""
	I1216 04:17:38.537537 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.537546 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:38.537553 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:38.537618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:38.563124 2088124 cri.go:89] found id: ""
	I1216 04:17:38.563159 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.563168 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:38.563174 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:38.563265 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:38.589894 2088124 cri.go:89] found id: ""
	I1216 04:17:38.589918 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.589927 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:38.589933 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:38.590001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:38.615078 2088124 cri.go:89] found id: ""
	I1216 04:17:38.615104 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.615114 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:38.615120 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:38.615188 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:38.640365 2088124 cri.go:89] found id: ""
	I1216 04:17:38.640397 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.640406 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:38.640416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:38.640486 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:38.664018 2088124 cri.go:89] found id: ""
	I1216 04:17:38.664095 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.664116 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:38.664125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:38.664194 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:38.704314 2088124 cri.go:89] found id: ""
	I1216 04:17:38.704341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.704350 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:38.704356 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:38.704415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:38.747321 2088124 cri.go:89] found id: ""
	I1216 04:17:38.747349 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.747357 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:38.747366 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:38.747377 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
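	The dmesg invocation keeps the capture bounded and plain: -H formats timestamps for humans, -P suppresses the pager that -H would otherwise start, -L=never strips color codes, --level restricts output to warning severity and worse, and tail -n 400 caps the volume. A variant with absolute timestamps, useful for correlating against the journalctl output gathered alongside it (a sketch, not part of the test run):

		# same severity filter, but with wall-clock timestamps
		sudo dmesg --ctime --level=warn,err,crit,alert,emerg | tail -n 400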
	I1216 04:17:38.778906 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:38.778937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:38.846005 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:38.837440   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.837970   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.839654   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.840202   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.841808   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:38.846026 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:38.846039 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:38.872344 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:38.872381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:38.907009 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:38.907060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:41.467452 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:41.478044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:41.478160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:41.505036 2088124 cri.go:89] found id: ""
	I1216 04:17:41.505062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.505072 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:41.505079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:41.505163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:41.533010 2088124 cri.go:89] found id: ""
	I1216 04:17:41.533044 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.533054 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:41.533078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:41.533160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:41.557094 2088124 cri.go:89] found id: ""
	I1216 04:17:41.557166 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.557181 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:41.557188 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:41.557261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:41.585674 2088124 cri.go:89] found id: ""
	I1216 04:17:41.585718 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.585727 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:41.585734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:41.585805 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:41.610276 2088124 cri.go:89] found id: ""
	I1216 04:17:41.610311 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.610320 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:41.610327 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:41.610398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:41.636914 2088124 cri.go:89] found id: ""
	I1216 04:17:41.636981 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.637010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:41.637025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:41.637097 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:41.665097 2088124 cri.go:89] found id: ""
	I1216 04:17:41.665161 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.665187 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:41.665202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:41.665279 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:41.727525 2088124 cri.go:89] found id: ""
	I1216 04:17:41.727553 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.727562 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:41.727571 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:41.727589 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:41.817873 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:41.817913 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:41.834790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:41.834817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:41.903430 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:41.895682   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.896090   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897605   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897915   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.899505   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:41.903453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:41.903465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:41.928600 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:41.928640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:44.456049 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:44.466779 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:44.466853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:44.493084 2088124 cri.go:89] found id: ""
	I1216 04:17:44.493110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.493119 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:44.493126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:44.493185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:44.517683 2088124 cri.go:89] found id: ""
	I1216 04:17:44.517717 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.517727 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:44.517734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:44.517810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:44.541717 2088124 cri.go:89] found id: ""
	I1216 04:17:44.541749 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.541758 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:44.541764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:44.541830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:44.565684 2088124 cri.go:89] found id: ""
	I1216 04:17:44.565711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.565723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:44.565729 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:44.565796 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:44.590246 2088124 cri.go:89] found id: ""
	I1216 04:17:44.590285 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.590293 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:44.590300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:44.590372 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:44.618255 2088124 cri.go:89] found id: ""
	I1216 04:17:44.618284 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.618292 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:44.618299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:44.618367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:44.648191 2088124 cri.go:89] found id: ""
	I1216 04:17:44.648219 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.648228 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:44.648234 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:44.648295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:44.679499 2088124 cri.go:89] found id: ""
	I1216 04:17:44.679574 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.679598 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:44.679615 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:44.679640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:44.758228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:44.758267 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:44.779294 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:44.779331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:44.858723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:44.849709   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.850588   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852326   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852658   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.854144   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:44.858749 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:44.858764 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:44.883969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:44.884008 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:47.413411 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:47.423987 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:47.424106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:47.449248 2088124 cri.go:89] found id: ""
	I1216 04:17:47.449314 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.449329 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:47.449336 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:47.449398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:47.475548 2088124 cri.go:89] found id: ""
	I1216 04:17:47.475578 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.475587 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:47.475593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:47.475655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:47.500110 2088124 cri.go:89] found id: ""
	I1216 04:17:47.500177 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.500199 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:47.500218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:47.500306 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:47.530628 2088124 cri.go:89] found id: ""
	I1216 04:17:47.530696 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.530723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:47.530741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:47.530826 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:47.556437 2088124 cri.go:89] found id: ""
	I1216 04:17:47.556464 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.556473 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:47.556479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:47.556549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:47.581048 2088124 cri.go:89] found id: ""
	I1216 04:17:47.581071 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.581081 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:47.581088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:47.581148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:47.606560 2088124 cri.go:89] found id: ""
	I1216 04:17:47.606588 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.606596 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:47.606603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:47.606663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:47.640327 2088124 cri.go:89] found id: ""
	I1216 04:17:47.640352 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.640360 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:47.640370 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:47.640388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:47.702815 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:47.702920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:47.736710 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:47.736751 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:47.839518 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:47.830501   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.831305   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833129   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833682   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.835432   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:47.839540 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:47.839554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:47.865722 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:47.865758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:50.397056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:50.409097 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:50.409241 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:50.437681 2088124 cri.go:89] found id: ""
	I1216 04:17:50.437704 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.437714 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:50.437743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:50.437829 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:50.462756 2088124 cri.go:89] found id: ""
	I1216 04:17:50.462783 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.462791 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:50.462798 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:50.462914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:50.487724 2088124 cri.go:89] found id: ""
	I1216 04:17:50.487751 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.487760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:50.487767 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:50.487873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:50.513141 2088124 cri.go:89] found id: ""
	I1216 04:17:50.513208 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.513219 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:50.513237 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:50.513315 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:50.538993 2088124 cri.go:89] found id: ""
	I1216 04:17:50.539100 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.539124 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:50.539144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:50.539231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:50.564296 2088124 cri.go:89] found id: ""
	I1216 04:17:50.564319 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.564328 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:50.564335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:50.564395 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:50.587840 2088124 cri.go:89] found id: ""
	I1216 04:17:50.587865 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.587874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:50.587880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:50.587941 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:50.616481 2088124 cri.go:89] found id: ""
	I1216 04:17:50.616555 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.616577 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:50.616595 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:50.616611 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:50.674183 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:50.674218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:50.705566 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:50.705596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:50.817242 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:50.808677   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.809401   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811128   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811612   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.813344   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:50.817265 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:50.817278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:50.842758 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:50.842792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.372576 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:53.383245 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:53.383313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:53.407745 2088124 cri.go:89] found id: ""
	I1216 04:17:53.407767 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.407775 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:53.407781 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:53.407839 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:53.435170 2088124 cri.go:89] found id: ""
	I1216 04:17:53.435194 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.435203 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:53.435209 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:53.435268 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:53.461399 2088124 cri.go:89] found id: ""
	I1216 04:17:53.461426 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.461437 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:53.461443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:53.461504 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:53.492254 2088124 cri.go:89] found id: ""
	I1216 04:17:53.492279 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.492289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:53.492295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:53.492356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:53.515778 2088124 cri.go:89] found id: ""
	I1216 04:17:53.515802 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.515810 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:53.515816 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:53.515875 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:53.539474 2088124 cri.go:89] found id: ""
	I1216 04:17:53.539498 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.539508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:53.539514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:53.539576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:53.565164 2088124 cri.go:89] found id: ""
	I1216 04:17:53.565229 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.565255 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:53.565273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:53.565359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:53.589875 2088124 cri.go:89] found id: ""
	I1216 04:17:53.589941 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.589963 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:53.589984 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:53.590026 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:53.654018 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:53.644813   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.645597   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647434   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647966   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.649437   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:53.654042 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:53.654059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:53.679510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:53.679548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.719485 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:53.719514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:53.792435 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:53.792471 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:56.314262 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:56.325267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:56.325348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:56.350886 2088124 cri.go:89] found id: ""
	I1216 04:17:56.350908 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.350917 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:56.350923 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:56.350985 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:56.375203 2088124 cri.go:89] found id: ""
	I1216 04:17:56.375230 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.375239 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:56.375246 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:56.375305 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:56.400956 2088124 cri.go:89] found id: ""
	I1216 04:17:56.400980 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.400988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:56.400994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:56.401055 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:56.426054 2088124 cri.go:89] found id: ""
	I1216 04:17:56.426077 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.426086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:56.426093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:56.426154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:56.451881 2088124 cri.go:89] found id: ""
	I1216 04:17:56.451905 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.451914 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:56.451920 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:56.452029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:56.483163 2088124 cri.go:89] found id: ""
	I1216 04:17:56.483190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.483199 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:56.483223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:56.483297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:56.509283 2088124 cri.go:89] found id: ""
	I1216 04:17:56.509307 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.509316 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:56.509321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:56.509386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:56.533713 2088124 cri.go:89] found id: ""
	I1216 04:17:56.533788 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.533813 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:56.533851 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:56.533883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:56.591786 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:56.591822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:56.608010 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:56.608041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:56.677352 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:56.669278   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.669934   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.671527   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.672102   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.673230   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:56.677375 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:56.677388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:56.710597 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:56.710632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:59.260233 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:59.274612 2088124 out.go:203] 
	W1216 04:17:59.277673 2088124 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 04:17:59.277728 2088124 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 04:17:59.277743 2088124 out.go:285] * Related issues:
	W1216 04:17:59.277759 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 04:17:59.277770 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 04:17:59.280576 2088124 out.go:203] 
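	K8S_APISERVER_MISSING is the terminal verdict of the polling visible above: for the full 6m0s node wait, every pgrep probe for the apiserver process came back empty, so minikube gave up rather than timing out on readiness. The wait reduces to roughly this loop (an approximation of the behaviour logged above, not minikube's actual Go code):

		# poll for the apiserver process every ~3s until a deadline, as the log shows
		for i in $(seq 1 120); do
		  minikube -p newest-cni-450938 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
		  sleep 3
		done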
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617588600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617668196Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617791517Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617872237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617937228Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618063593Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618133950Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618196685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618268330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618361662Z" level=info msg="Connect containerd service"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618902392Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.619957818Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.633592863Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.633677981Z" level=info msg="Start recovering state"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.635272389Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.635428480Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673007758Z" level=info msg="Start event monitor"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673059047Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673070189Z" level=info msg="Start streaming server"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673079715Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673091785Z" level=info msg="runtime interface starting up..."
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673098340Z" level=info msg="starting plugins..."
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673130848Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673430452Z" level=info msg="containerd successfully booted in 0.082787s"
	Dec 16 04:11:55 newest-cni-450938 systemd[1]: Started containerd.service - containerd container runtime.
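	The only error in the containerd boot sequence above is the CNI loader complaint: /etc/cni/net.d holds no network config at startup. For a newest-cni profile that is expected until CNI manifests are applied, so it does not by itself explain the missing apiserver; it can be confirmed from the host (a manual check, assuming the profile name taken from the log):

		# empty output here matches the "no network config found" error above
		minikube -p newest-cni-450938 ssh -- ls -l /etc/cni/net.d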
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:18:02.571785   13420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:02.572580   13420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:02.574155   13420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:02.574676   13420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:02.576313   13420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:18:02 up 10:00,  0 user,  load average: 0.59, 0.58, 1.06
	Linux newest-cni-450938 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:17:59 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:17:59 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 16 04:17:59 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:17:59 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:17:59 newest-cni-450938 kubelet[13295]: E1216 04:17:59.832161   13295 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:17:59 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:17:59 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:00 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 16 04:18:00 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:00 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:00 newest-cni-450938 kubelet[13300]: E1216 04:18:00.765014   13300 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:00 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:00 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:01 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 16 04:18:01 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:01 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:01 newest-cni-450938 kubelet[13321]: E1216 04:18:01.500783   13321 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:01 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:01 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:02 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 16 04:18:02 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:02 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:02 newest-cni-450938 kubelet[13333]: E1216 04:18:02.234580   13333 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:02 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:02 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
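The kubelet restart loop above (restart counter 484 through 487) is the proximate failure: kubelet v1.35.0-beta.0 refuses to start because the host is still on cgroup v1, so the apiserver never comes up and every later kubectl and status probe fails with it. The earlier containerd complaint about missing config in /etc/cni/net.d is likely benign here, since this profile starts with --network-plugin=cni and a CNI manifest is normally applied only once the apiserver is reachable. One way to confirm which cgroup hierarchy a host exposes (a hedged shell sketch, not part of the recorded run):

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host.
	stat -fc %T /sys/fs/cgroup/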
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (369.110876ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-450938" apiserver is not running, skipping kubectl commands (state="Stopped")
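With the apiserver down, the connection-refused errors from kubectl above (dial tcp [::1]:8443) can be cross-checked from the host through the published port; the docker inspect output below maps 8443/tcp to 127.0.0.1:34672. A hedged probe, not part of the recorded run:

	# Expect "connection refused" while the apiserver is down, "ok" once it is healthy.
	# -k skips TLS verification against the cluster's self-signed CA.
	curl -k https://127.0.0.1:34672/healthz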
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (374.91s)

x
+
TestStartStop/group/newest-cni/serial/Pause (9.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-450938 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (301.687395ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-450938 -n newest-cni-450938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (306.749022ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-450938 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (307.42122ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-450938 -n newest-cni-450938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (319.588956ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
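The failed assertions above can be replayed by hand with the same Go templates the test drives; a hedged sketch of the sequence:

	out/minikube-linux-arm64 pause -p newest-cni-450938 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-450938   # test wants "Paused"
	out/minikube-linux-arm64 unpause -p newest-cni-450938 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-450938   # test wants "Running"
	out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p newest-cni-450938     # test wants "Running"

Every probe here reports "Stopped" because the kubelet never passed configuration validation (the cgroup v1 error in the post-mortem logs), so there was nothing running to pause or unpause.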
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-450938
helpers_test.go:244: (dbg) docker inspect newest-cni-450938:

-- stdout --
	[
	    {
	        "Id": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	        "Created": "2025-12-16T04:01:45.321904496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2088249,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:11:49.715157618Z",
	            "FinishedAt": "2025-12-16T04:11:48.344695153Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hosts",
	        "LogPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65-json.log",
	        "Name": "/newest-cni-450938",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450938:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-450938",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	                "LowerDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450938",
	                "Source": "/var/lib/docker/volumes/newest-cni-450938/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450938",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450938",
	                "name.minikube.sigs.k8s.io": "newest-cni-450938",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d040f98e420d560a9e17e89d3d7fe4b27a499b96ccdebe83fcb72878ac3aa5a",
	            "SandboxKey": "/var/run/docker/netns/0d040f98e420",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34669"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34673"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34671"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34672"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450938": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:64:e7:5f:26:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "961937bd6f37532287f488797e74382e326ca0852d2ef3f8a1d23a546f1f7d1a",
	                    "EndpointID": "06c1897ed9171a5e6bbd198d06b0b6b16523d38b6c9e3e64ec0084e4fa9e4f3b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450938",
	                        "e2dde4cac2e0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
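The full inspect dump above can be narrowed to just the fields the post-mortem consults, using the same Go-template style the harness itself runs; a hedged example:

	# Container state plus the host port that fronts the apiserver (8443/tcp).
	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-450938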
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (314.201748ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25: (1.603892687s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	│ stop    │ -p no-preload-255023 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ addons  │ enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ start   │ -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	│ stop    │ -p newest-cni-450938 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-450938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │                     │
	│ image   │ newest-cni-450938 image list --format=json                                                                                                                                                                                                                 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ pause   │ -p newest-cni-450938 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ unpause │ -p newest-cni-450938 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:11:49
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:11:49.443609 2088124 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:11:49.443766 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.443791 2088124 out.go:374] Setting ErrFile to fd 2...
	I1216 04:11:49.443797 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.444086 2088124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:11:49.444552 2088124 out.go:368] Setting JSON to false
	I1216 04:11:49.445491 2088124 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35654,"bootTime":1765822656,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:11:49.445560 2088124 start.go:143] virtualization:  
	I1216 04:11:49.450767 2088124 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:11:49.453684 2088124 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:11:49.453830 2088124 notify.go:221] Checking for updates...
	I1216 04:11:49.459490 2088124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:11:49.462425 2088124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:49.465199 2088124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:11:49.468049 2088124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:11:49.470926 2088124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:11:49.474323 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:49.474898 2088124 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:11:49.507547 2088124 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:11:49.507675 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.559588 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.550344871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.559694 2088124 docker.go:319] overlay module found
	I1216 04:11:49.564661 2088124 out.go:179] * Using the docker driver based on existing profile
	I1216 04:11:49.567577 2088124 start.go:309] selected driver: docker
	I1216 04:11:49.567592 2088124 start.go:927] validating driver "docker" against &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.567688 2088124 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:11:49.568412 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.630893 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.62154899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.631269 2088124 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:11:49.631299 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:49.631354 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:11:49.631398 2088124 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.634471 2088124 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:11:49.637273 2088124 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:11:49.640282 2088124 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:11:49.643072 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:49.643109 2088124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:11:49.643124 2088124 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:11:49.643134 2088124 cache.go:65] Caching tarball of preloaded images
	I1216 04:11:49.643213 2088124 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:11:49.643223 2088124 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:11:49.643349 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:49.663232 2088124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:11:49.663256 2088124 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:11:49.663277 2088124 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:11:49.663307 2088124 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:11:49.663368 2088124 start.go:364] duration metric: took 37.825µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:11:49.663390 2088124 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:11:49.663398 2088124 fix.go:54] fixHost starting: 
	I1216 04:11:49.663657 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.680807 2088124 fix.go:112] recreateIfNeeded on newest-cni-450938: state=Stopped err=<nil>
	W1216 04:11:49.680842 2088124 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:11:49.684150 2088124 out.go:252] * Restarting existing docker container for "newest-cni-450938" ...
	I1216 04:11:49.684240 2088124 cli_runner.go:164] Run: docker start newest-cni-450938
	I1216 04:11:49.955342 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.981840 2088124 kic.go:430] container "newest-cni-450938" state is running.
	I1216 04:11:49.982211 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:50.021278 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:50.021527 2088124 machine.go:94] provisionDockerMachine start ...
	I1216 04:11:50.021596 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:50.049595 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:50.050060 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:50.050075 2088124 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:11:50.050748 2088124 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:11:53.188290 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.188358 2088124 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:11:53.188485 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.208640 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.208973 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.208992 2088124 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:11:53.354850 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.354932 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.373349 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.373653 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.373677 2088124 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:11:53.507317 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:11:53.507346 2088124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:11:53.507369 2088124 ubuntu.go:190] setting up certificates
	I1216 04:11:53.507379 2088124 provision.go:84] configureAuth start
	I1216 04:11:53.507463 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:53.525162 2088124 provision.go:143] copyHostCerts
	I1216 04:11:53.525241 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:11:53.525251 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:11:53.525327 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:11:53.525423 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:11:53.525428 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:11:53.525453 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:11:53.525509 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:11:53.525514 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:11:53.525536 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:11:53.525580 2088124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
	I1216 04:11:54.045695 2088124 provision.go:177] copyRemoteCerts
	I1216 04:11:54.045768 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:11:54.045810 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.066867 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.167270 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:11:54.185959 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:11:54.204990 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:11:54.223347 2088124 provision.go:87] duration metric: took 715.940901ms to configureAuth
	I1216 04:11:54.223373 2088124 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:11:54.223571 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:54.223579 2088124 machine.go:97] duration metric: took 4.202043696s to provisionDockerMachine
	I1216 04:11:54.223586 2088124 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:11:54.223597 2088124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:11:54.223657 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:11:54.223694 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.241386 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.339071 2088124 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:11:54.342372 2088124 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:11:54.342404 2088124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:11:54.342417 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:11:54.342476 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:11:54.342569 2088124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:11:54.342679 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:11:54.350184 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:54.367994 2088124 start.go:296] duration metric: took 144.392831ms for postStartSetup
	I1216 04:11:54.368092 2088124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:11:54.368136 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.385560 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.484799 2088124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:11:54.491513 2088124 fix.go:56] duration metric: took 4.828106411s for fixHost
	I1216 04:11:54.491541 2088124 start.go:83] releasing machines lock for "newest-cni-450938", held for 4.82816163s
	I1216 04:11:54.491612 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:54.509094 2088124 ssh_runner.go:195] Run: cat /version.json
	I1216 04:11:54.509138 2088124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:11:54.509150 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.509206 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.527383 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.529259 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.622646 2088124 ssh_runner.go:195] Run: systemctl --version
	I1216 04:11:54.714029 2088124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:11:54.718486 2088124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:11:54.718568 2088124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:11:54.726541 2088124 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
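	[editor's note] The find/mv pipeline above neutralizes any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix (here none were found). A hedged Go rendering of that step, using filepath.Glob rather than find, is sketched below; it is not minikube's implementation.

	```go
	// Sketch of the bridge-CNI disabling step: rename bridge/podman
	// configs in /etc/cni/net.d to *.mk_disabled, skipping ones already
	// disabled. Hypothetical Go equivalent of the logged pipeline.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
	```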
	I1216 04:11:54.726568 2088124 start.go:496] detecting cgroup driver to use...
	I1216 04:11:54.726632 2088124 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:11:54.726714 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:11:54.745031 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:11:54.758297 2088124 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:11:54.758370 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:11:54.774348 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:11:54.787565 2088124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:11:54.906330 2088124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:11:55.031458 2088124 docker.go:234] disabling docker service ...
	I1216 04:11:55.031602 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:11:55.047495 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:11:55.061071 2088124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:11:55.176474 2088124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:11:55.308037 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:11:55.321108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:11:55.335545 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:11:55.344904 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:11:55.354341 2088124 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:11:55.354432 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:11:55.364241 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.373363 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:11:55.382311 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.391427 2088124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:11:55.399573 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:11:55.408617 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:11:55.417842 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:11:55.427155 2088124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:11:55.435028 2088124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:11:55.442465 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:55.555794 2088124 ssh_runner.go:195] Run: sudo systemctl restart containerd
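	[editor's note] The run of sed invocations above rewrites /etc/containerd/config.toml in place (sandbox image, SystemdCgroup=false for the cgroupfs driver, runc v2 shim, CNI conf_dir) before the daemon-reload and containerd restart. As an illustration only, the SystemdCgroup toggle can be expressed in Go with a multiline regexp, as sketched below; minikube itself shells out to sed as logged.

	```go
	// Illustrative sketch: the SystemdCgroup rewrite done with Go's
	// regexp instead of sed. Same file and effect as the logged command:
	// sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			log.Fatal(err)
		}
	}
	```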
	I1216 04:11:55.675355 2088124 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:11:55.675506 2088124 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:11:55.679491 2088124 start.go:564] Will wait 60s for crictl version
	I1216 04:11:55.679606 2088124 ssh_runner.go:195] Run: which crictl
	I1216 04:11:55.683263 2088124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:11:55.706762 2088124 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:11:55.706911 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.726295 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.754045 2088124 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:11:55.757209 2088124 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:55.773141 2088124 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:11:55.777028 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
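	[editor's note] The grep/echo pipeline above is an idempotent /etc/hosts update: any existing host.minikube.internal line is dropped and a fresh entry appended (the same pattern recurs later for control-plane.minikube.internal). A hypothetical Go helper with the same behavior:

	```go
	// Sketch of the idempotent /etc/hosts rewrite shown above: remove any
	// stale "host.minikube.internal" line, then append the fresh mapping.
	// Hypothetical helper, not minikube's code.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.76.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
		if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
			log.Fatal(err)
		}
		fmt.Println("updated /etc/hosts")
	}
	```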
	I1216 04:11:55.790127 2088124 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:11:55.792976 2088124 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:11:55.793134 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:55.793224 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.820865 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.820893 2088124 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:11:55.820953 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.848708 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.848733 2088124 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:11:55.848741 2088124 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:11:55.848865 2088124 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:11:55.848944 2088124 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:11:55.877782 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:55.877809 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
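	[editor's note] The line above records minikube's CNI decision: the docker driver with the containerd runtime has no built-in pod networking, so kindnet is recommended. A deliberately oversimplified Go sketch of that selection, hypothetical and not minikube's cni package, which weighs many more inputs:

	```go
	// Hypothetical simplification of the driver/runtime -> CNI choice
	// logged above; the real logic considers multi-node, user flags, etc.
	package main

	import "fmt"

	func chooseCNI(driver, runtime string) string {
		if driver == "docker" && runtime == "containerd" {
			return "kindnet"
		}
		return "bridge" // assumed fallback for this sketch
	}

	func main() {
		fmt.Println(chooseCNI("docker", "containerd")) // kindnet
	}
	```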
	I1216 04:11:55.877833 2088124 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:11:55.877856 2088124 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:11:55.877980 2088124 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:11:55.878053 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:11:55.886063 2088124 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:11:55.886135 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:11:55.893994 2088124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:11:55.906976 2088124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:11:55.921636 2088124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
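	[editor's note] The scp above lands the kubeadm config dumped earlier at /var/tmp/minikube/kubeadm.yaml.new. Configs of this shape are typically rendered from a text template populated with the per-cluster options; the Go sketch below shows that pattern on a tiny excerpt of the config. Minikube's real template covers far more fields, so treat this as a minimal sketch under that assumption.

	```go
	// Minimal sketch, assuming a text/template-based renderer for the
	// kubeadm InitConfiguration header shown in the log above.
	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		t.Execute(os.Stdout, struct {
			NodeIP   string
			Port     int
			NodeName string
		}{"192.168.76.2", 8443, "newest-cni-450938"})
	}
	```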
	I1216 04:11:55.935475 2088124 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:11:55.940181 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:55.958241 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.086097 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:56.102803 2088124 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:11:56.102828 2088124 certs.go:195] generating shared ca certs ...
	I1216 04:11:56.102856 2088124 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.103007 2088124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:11:56.103163 2088124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:11:56.103175 2088124 certs.go:257] generating profile certs ...
	I1216 04:11:56.103292 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:11:56.103376 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:11:56.103427 2088124 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:11:56.103545 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:11:56.103587 2088124 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:11:56.103600 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:11:56.103627 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:11:56.103658 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:11:56.103686 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:11:56.103735 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:56.104338 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:11:56.126254 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:11:56.147493 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:11:56.167667 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:11:56.186450 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:11:56.204453 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:11:56.222875 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:11:56.240385 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:11:56.257955 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:11:56.276171 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:11:56.293848 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:11:56.311719 2088124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:11:56.324807 2088124 ssh_runner.go:195] Run: openssl version
	I1216 04:11:56.331262 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.338764 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:11:56.346054 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.349987 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.350052 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.391179 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:11:56.398825 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.406218 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:11:56.413696 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417638 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417705 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.459490 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:11:56.466920 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.474252 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:11:56.481440 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485119 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485259 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.526344 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:11:56.533907 2088124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:11:56.537774 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:11:56.578487 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:11:56.619729 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:11:56.660999 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:11:56.702232 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:11:56.744306 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
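	[editor's note] Each `openssl x509 -noout -in CERT -checkend 86400` run above asks one question: does the certificate expire within the next 24 hours (86400 seconds)? The Go sketch below performs the same check with crypto/x509; it is an illustrative equivalent, whereas minikube shells out to openssl as logged.

	```go
	// Go equivalent of `openssl x509 -noout -in CERT -checkend 86400`:
	// fail if the certificate's NotAfter falls within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile(os.Args[1]) // path to a PEM certificate
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
		}
	}
	```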
	I1216 04:11:56.785680 2088124 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:56.785803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:11:56.785870 2088124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:11:56.816785 2088124 cri.go:89] found id: ""
	I1216 04:11:56.816890 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:11:56.824683 2088124 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:11:56.824744 2088124 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:11:56.824813 2088124 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:11:56.832253 2088124 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:11:56.832838 2088124 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.833086 2088124 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-450938" cluster setting kubeconfig missing "newest-cni-450938" context setting]
	I1216 04:11:56.833830 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.835841 2088124 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:11:56.846568 2088124 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 04:11:56.846607 2088124 kubeadm.go:602] duration metric: took 21.839206ms to restartPrimaryControlPlane
	I1216 04:11:56.846659 2088124 kubeadm.go:403] duration metric: took 60.947212ms to StartCluster
	I1216 04:11:56.846683 2088124 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.846774 2088124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.847954 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.848288 2088124 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:11:56.848543 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:56.848590 2088124 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:11:56.848653 2088124 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-450938"
	I1216 04:11:56.848667 2088124 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-450938"
	I1216 04:11:56.848690 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.849140 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.849662 2088124 addons.go:70] Setting dashboard=true in profile "newest-cni-450938"
	I1216 04:11:56.849685 2088124 addons.go:239] Setting addon dashboard=true in "newest-cni-450938"
	W1216 04:11:56.849692 2088124 addons.go:248] addon dashboard should already be in state true
	I1216 04:11:56.849725 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.850155 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.851797 2088124 addons.go:70] Setting default-storageclass=true in profile "newest-cni-450938"
	I1216 04:11:56.851835 2088124 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450938"
	I1216 04:11:56.852230 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.854311 2088124 out.go:179] * Verifying Kubernetes components...
	I1216 04:11:56.857550 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.877736 2088124 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:11:56.883198 2088124 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:11:56.888994 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:11:56.889023 2088124 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:11:56.889099 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.905463 2088124 addons.go:239] Setting addon default-storageclass=true in "newest-cni-450938"
	I1216 04:11:56.905510 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.905917 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.906132 2088124 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:11:56.909026 2088124 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:56.909049 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:11:56.909124 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.939233 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.960779 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.969260 2088124 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:56.969285 2088124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:11:56.969344 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.994990 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:57.096083 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:57.153660 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:57.154691 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:11:57.154741 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:11:57.179948 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:57.181646 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:11:57.181698 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:11:57.220157 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:11:57.220192 2088124 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:11:57.270420 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:11:57.270450 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:11:57.289844 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:11:57.289925 2088124 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:11:57.304564 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:11:57.304589 2088124 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:11:57.318199 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:11:57.318268 2088124 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:11:57.331721 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:11:57.331747 2088124 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:11:57.344689 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:11:57.344766 2088124 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:11:57.358118 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:57.937381 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937417 2088124 retry.go:31] will retry after 269.480362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:57.937480 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937486 2088124 retry.go:31] will retry after 229.28952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:57.937664 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937674 2088124 retry.go:31] will retry after 277.329171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
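	[editor's note] The "apply failed, will retry" / "will retry after 269.480362ms" pairs above show the addon applies being retried with short, jittered delays while the apiserver is still coming up (every failure is the same localhost:8443 connection refused). A hedged Go sketch of such a retry-with-jitter loop follows; it is hypothetical and not minikube's retry.go.

	```go
	// Sketch of the retry loop implied by the "will retry after ..."
	// lines: re-run an operation with jittered backoff until it succeeds
	// or attempts are exhausted. Hypothetical; not minikube's retry.go.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			// Jittered delay, roughly matching the 200-400ms waits logged.
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		n := 0
		err := retry(5, 200*time.Millisecond, func() error {
			n++
			if n < 3 {
				return errors.New("connection refused") // apiserver not up yet
			}
			return nil
		})
		fmt.Println("result:", err)
	}
	```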
	I1216 04:11:57.937800 2088124 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:11:57.937903 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:58.167607 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.207320 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:58.215928 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.286306 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.286392 2088124 retry.go:31] will retry after 251.551644ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.336689 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.336775 2088124 retry.go:31] will retry after 297.618581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.344615 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.344703 2088124 retry.go:31] will retry after 371.748045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 04:11:58.438848 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
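The ssh_runner.go:195] Run: lines execute each of these commands inside the minikube node over SSH. A rough sketch of that pattern using golang.org/x/crypto/ssh (the address, user, and credential below are placeholders; minikube's ssh_runner is more involved):

    // Rough sketch of an SSH command runner; not minikube's actual code.
    package main

    import (
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",                                 // placeholder node user
            Auth:            []ssh.AuthMethod{ssh.Password("secret")}, // placeholder credential
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // illustration only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:22", cfg) // placeholder address
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // One session runs one command, just like each "Run:" line above.
        out, err := sess.CombinedOutput("sudo pgrep -xnf kube-apiserver.*minikube.*")
        fmt.Printf("out=%q err=%v\n", out, err)
    }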
	I1216 04:11:58.538550 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:58.607193 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.607227 2088124 retry.go:31] will retry after 295.364456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1216 04:11:58.635597 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:58.705620 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.705655 2088124 retry.go:31] will retry after 548.313742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1216 04:11:58.716963 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.791977 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.792012 2088124 retry.go:31] will retry after 352.878163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 04:11:58.903095 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.938720 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:11:58.980189 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.980231 2088124 retry.go:31] will retry after 538.903986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1216 04:11:59.145753 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:59.214092 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.214141 2088124 retry.go:31] will retry after 822.609154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 04:11:59.254394 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:59.315668 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.315705 2088124 retry.go:31] will retry after 808.232785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1216 04:11:59.439021 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:59.520292 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:59.580253 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.580290 2088124 retry.go:31] will retry after 1.339162464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1216 04:11:59.938854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.037859 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:12:00.126588 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:00.271287 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271330 2088124 retry.go:31] will retry after 1.560463337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1216 04:12:00.271395 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271405 2088124 retry.go:31] will retry after 965.630874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1216 04:12:00.439512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.919713 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:00.938198 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:01.016821 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.016853 2088124 retry.go:31] will retry after 2.723457612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1216 04:12:01.238128 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:01.299810 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.299846 2088124 retry.go:31] will retry after 1.407497229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1216 04:12:01.438022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:01.832831 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:01.895982 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.896019 2088124 retry.go:31] will retry after 1.861173275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 04:12:01.938295 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.438804 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.708270 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:02.778471 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:02.778510 2088124 retry.go:31] will retry after 3.48676176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
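By this point the retry delays have grown from roughly 300ms to 3.5s, the signature of exponential backoff with jitter. A sketch of the same retry shape using the wait package from k8s.io/apimachinery (minikube's retry.go may be implemented differently, and the Factor and Jitter values are assumptions):

    // Sketch of the retry pattern visible in the log: re-run the apply
    // with a growing, jittered delay until it succeeds or steps run out.
    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        backoff := wait.Backoff{
            Duration: 300 * time.Millisecond, // initial delay, as in the log
            Factor:   1.7,                    // assumed growth factor
            Jitter:   0.3,                    // assumed jitter
            Steps:    10,                     // give up after 10 attempts
        }
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            out, err := exec.Command("kubectl", "apply", "--force",
                "-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
            if err != nil {
                fmt.Printf("apply failed, will retry: %s\n", out)
                return false, nil // not done; retry after the next delay
            }
            return true, nil // done
        })
        if err != nil {
            fmt.Println("gave up:", err)
        }
    }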
	I1216 04:12:02.938901 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.740586 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:03.758141 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:03.823512 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.823549 2088124 retry.go:31] will retry after 3.513983603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1216 04:12:03.840241 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.840277 2088124 retry.go:31] will retry after 3.549700703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.938636 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.438975 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.438813 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:06.265883 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:06.326297 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.326330 2088124 retry.go:31] will retry after 5.907729831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.438566 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:06.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:07.337994 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:07.390520 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:07.400091 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.400119 2088124 retry.go:31] will retry after 4.07949146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.438412 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:07.458870 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.458913 2088124 retry.go:31] will retry after 5.738742007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.938058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.438048 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.938086 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.438088 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.938071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.438982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.938817 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:11.438560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:11.480608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:11.544274 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.544311 2088124 retry.go:31] will retry after 7.489839912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.938962 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.234793 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:12.294760 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.294795 2088124 retry.go:31] will retry after 8.284230916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.438042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.938369 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.198743 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:13.273972 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.274008 2088124 retry.go:31] will retry after 8.727161897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.438137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.938122 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.938105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.438117 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.938675 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.438275 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.438977 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.938090 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.438139 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.938875 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
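	Note: the half-second cadence of the pgrep lines above is minikube waiting for a kube-apiserver process to appear before it stops retrying the addon applies. A local sketch of that readiness poll follows; the pattern string and 500ms interval are taken from the log, while the wrapper itself is illustrative (the real poll runs via sudo over ssh_runner, which this version drops for brevity).

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // waitForProcess polls pgrep until a process matching pattern appears or
	    // the timeout expires; pgrep exits 0 on a match, non-zero otherwise.
	    func waitForProcess(pattern string, timeout time.Duration) bool {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
	    			return true
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return false
	    }

	    func main() {
	    	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 30*time.Second))
	    }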
	I1216 04:12:19.034608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:19.095129 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.095161 2088124 retry.go:31] will retry after 13.285449955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.438765 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.938027 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.438947 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.579839 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:20.651187 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.651287 2088124 retry.go:31] will retry after 8.595963064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.438919 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.938886 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.001902 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:22.069854 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.069889 2088124 retry.go:31] will retry after 9.875475964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.438071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.938057 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.438759 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.938093 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.438012 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.438056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.938060 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.438683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.938545 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.438839 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.938076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.438528 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.938942 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.247522 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:29.319498 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.319530 2088124 retry.go:31] will retry after 11.610992075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.438808 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.938634 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.438853 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.938004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.438055 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.939022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.945765 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:32.028672 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.028710 2088124 retry.go:31] will retry after 8.660108846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.380884 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:32.438451 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:32.451845 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.451878 2088124 retry.go:31] will retry after 20.587741489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.939020 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.438637 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.939026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.438183 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.938889 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.438058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.438040 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.938449 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.438932 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.938711 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.438609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.938102 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.438039 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.938131 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.689879 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:40.758598 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.758633 2088124 retry.go:31] will retry after 22.619838961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.931114 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:12:40.938807 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:41.022703 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:41.022737 2088124 retry.go:31] will retry after 26.329717671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:41.438070 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:41.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.438073 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.938708 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.438842 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.938877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.438603 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.938026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.438387 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.938042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.438913 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.938061 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.438105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.938608 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.438052 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.938137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.438126 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.938158 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.438047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.437993 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.938585 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.040311 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:53.100279 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:53.100315 2088124 retry.go:31] will retry after 25.050501438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
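The scheduled delays above (22.6s, 26.3s, 25.1s) suggest retry.go wraps each failing kubectl apply in a jittered backoff. A standalone sketch of that pattern, with base delay and attempt count chosen for illustration; this is not minikube's actual retry implementation:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	backoff := 10 * time.Second
	for i := 0; i < attempts; i++ {
		if err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		} else {
			// Random jitter on top of the base delay, so the delay differs
			// on every attempt, as in the log's retry lines.
			delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			backoff *= 2
		}
	}
	return fmt.Errorf("apply of %s still failing after %d attempts", manifest, attempts)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 4); err != nil {
		fmt.Println(err)
	}
}

Jitter matters here because three appliers (dashboard, storageclass, storage-provisioner) are failing simultaneously; without it they would retry against the apiserver in lock-step once it came back.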
	I1216 04:12:53.438735 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.938047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.438981 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.938826 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.438076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.938982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:56.438082 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
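The half-second cadence of the pgrep lines above is a fixed-interval wait for the kube-apiserver process to appear before the tooling falls back to collecting diagnostics. A self-contained sketch of such a loop, with the 500ms interval taken from the timestamps and the overall timeout chosen purely for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer runs the same pgrep probe the log issues over SSH,
// once every 500ms, until the process shows up or the deadline lapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}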
	I1216 04:12:56.938775 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:56.938878 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:56.965240 2088124 cri.go:89] found id: ""
	I1216 04:12:56.965267 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.965275 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:56.965282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:56.965342 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:56.994326 2088124 cri.go:89] found id: ""
	I1216 04:12:56.994352 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.994361 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:56.994368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:56.994428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:57.023992 2088124 cri.go:89] found id: ""
	I1216 04:12:57.024019 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.024028 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:57.024034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:57.024096 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:57.048533 2088124 cri.go:89] found id: ""
	I1216 04:12:57.048557 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.048564 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:57.048571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:57.048633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:57.073452 2088124 cri.go:89] found id: ""
	I1216 04:12:57.073477 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.073489 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:57.073495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:57.073556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:12:57.098320 2088124 cri.go:89] found id: ""
	I1216 04:12:57.098343 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.098351 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:12:57.098358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:12:57.098422 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:12:57.122156 2088124 cri.go:89] found id: ""
	I1216 04:12:57.122178 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.122186 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:12:57.122192 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:12:57.122253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:12:57.146348 2088124 cri.go:89] found id: ""
	I1216 04:12:57.146371 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.146379 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:12:57.146389 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:12:57.146400 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:12:57.204504 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:12:57.204554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:12:57.222444 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:12:57.222477 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:12:57.295723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:12:57.295745 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:12:57.295758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:12:57.320926 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:12:57.320959 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
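Each diagnostic cycle above walks the same list of control-plane container names through crictl and finds nothing, consistent with the apiserver never having come up. A compact sketch of that probe sequence, reusing the crictl flags shown in the log; the name list mirrors the log's order:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same names, same order, as the cri.go listing loop above.
	names := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range names {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.TrimSpace(string(out))
		if err != nil || ids == "" {
			// Mirrors the log's warning when zero containers match.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, ids)
	}
}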
	I1216 04:12:59.851668 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:59.862238 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:59.862307 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:59.886311 2088124 cri.go:89] found id: ""
	I1216 04:12:59.886338 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.886346 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:59.886353 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:59.886412 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:59.910403 2088124 cri.go:89] found id: ""
	I1216 04:12:59.910426 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.910434 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:59.910440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:59.910498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:59.935230 2088124 cri.go:89] found id: ""
	I1216 04:12:59.935253 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.935262 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:59.935268 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:59.935329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:59.958999 2088124 cri.go:89] found id: ""
	I1216 04:12:59.959022 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.959030 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:59.959037 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:59.959113 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:59.984633 2088124 cri.go:89] found id: ""
	I1216 04:12:59.984655 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.984663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:59.984670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:59.984729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:00.052821 2088124 cri.go:89] found id: ""
	I1216 04:13:00.052848 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.052857 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:00.052865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:00.052942 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:00.179259 2088124 cri.go:89] found id: ""
	I1216 04:13:00.179286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.179295 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:00.179301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:00.179374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:00.301818 2088124 cri.go:89] found id: ""
	I1216 04:13:00.301845 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.301854 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:00.301865 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:00.301877 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:00.370430 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:00.370474 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:00.387961 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:00.387994 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:00.469934 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:00.470008 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:00.470035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:00.497033 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:00.497108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.031116 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:03.042155 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:03.042231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:03.067263 2088124 cri.go:89] found id: ""
	I1216 04:13:03.067286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.067294 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:03.067300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:03.067359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:03.092385 2088124 cri.go:89] found id: ""
	I1216 04:13:03.092411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.092421 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:03.092434 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:03.092500 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:03.121839 2088124 cri.go:89] found id: ""
	I1216 04:13:03.121866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.121874 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:03.121881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:03.121939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:03.145563 2088124 cri.go:89] found id: ""
	I1216 04:13:03.145591 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.145600 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:03.145606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:03.145674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:03.173280 2088124 cri.go:89] found id: ""
	I1216 04:13:03.173308 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.173317 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:03.173324 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:03.173387 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:03.198437 2088124 cri.go:89] found id: ""
	I1216 04:13:03.198464 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.198472 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:03.198479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:03.198539 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:03.223390 2088124 cri.go:89] found id: ""
	I1216 04:13:03.223417 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.223426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:03.223433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:03.223492 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:03.247999 2088124 cri.go:89] found id: ""
	I1216 04:13:03.248027 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.248037 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:03.248046 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:03.248058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:03.273012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:03.273045 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.309023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:03.309054 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:03.365917 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:03.365958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:03.379538 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:13:03.383127 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:03.383196 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:03.513399 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:03.513433 2088124 retry.go:31] will retry after 36.39416212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:03.513601 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.013933 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:06.025509 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:06.025592 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:06.052215 2088124 cri.go:89] found id: ""
	I1216 04:13:06.052240 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.052251 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:06.052258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:06.052322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:06.079261 2088124 cri.go:89] found id: ""
	I1216 04:13:06.079294 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.079303 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:06.079309 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:06.079373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:06.105297 2088124 cri.go:89] found id: ""
	I1216 04:13:06.105320 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.105329 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:06.105335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:06.105394 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:06.134648 2088124 cri.go:89] found id: ""
	I1216 04:13:06.134671 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.134679 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:06.134685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:06.134753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:06.159604 2088124 cri.go:89] found id: ""
	I1216 04:13:06.159627 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.159635 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:06.159641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:06.159705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:06.189283 2088124 cri.go:89] found id: ""
	I1216 04:13:06.189307 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.189315 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:06.189322 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:06.189431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:06.214435 2088124 cri.go:89] found id: ""
	I1216 04:13:06.214469 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.214479 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:06.214486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:06.214553 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:06.240374 2088124 cri.go:89] found id: ""
	I1216 04:13:06.240399 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.240407 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:06.240417 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:06.240465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:06.297779 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:06.297828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:06.314788 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:06.314817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:06.383844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.383863 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:06.383876 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:06.409175 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:06.409211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:07.353255 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:07.417109 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:07.417143 2088124 retry.go:31] will retry after 43.71748827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:08.979175 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:08.990018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:08.990104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:09.017028 2088124 cri.go:89] found id: ""
	I1216 04:13:09.017051 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.017060 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:09.017066 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:09.017126 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:09.042381 2088124 cri.go:89] found id: ""
	I1216 04:13:09.042404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.042413 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:09.042419 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:09.042477 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:09.071646 2088124 cri.go:89] found id: ""
	I1216 04:13:09.071670 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.071679 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:09.071685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:09.071744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:09.100697 2088124 cri.go:89] found id: ""
	I1216 04:13:09.100722 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.100730 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:09.100737 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:09.100797 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:09.129662 2088124 cri.go:89] found id: ""
	I1216 04:13:09.129695 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.129704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:09.129710 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:09.129780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:09.156770 2088124 cri.go:89] found id: ""
	I1216 04:13:09.156794 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.156802 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:09.156809 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:09.156869 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:09.182436 2088124 cri.go:89] found id: ""
	I1216 04:13:09.182458 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.182466 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:09.182472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:09.182531 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:09.206146 2088124 cri.go:89] found id: ""
	I1216 04:13:09.206170 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.206177 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:09.206186 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:09.206198 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:09.231510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:09.231544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:09.260226 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:09.260256 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:09.316036 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:09.316074 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:09.332123 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:09.332153 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:09.399253 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:09.390164    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.390761    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392299    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392750    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.394603    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:09.390164    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.390761    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392299    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392750    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.394603    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
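Note: the repeated cycles from 04:13:09 onward are minikube's control-plane wait loop. Each pass runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, lists CRI containers for every expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers kubelet/containerd/dmesg/container-status logs; every `kubectl describe nodes` fails because nothing is listening on localhost:8443. A minimal sketch for probing the same state by hand, assuming the profile name from this run and that curl is available in the node image (both are assumptions; adjust for your environment):

	# Is an apiserver process or container present inside the node?
	minikube -p functional-389759 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	minikube -p functional-389759 ssh -- sudo crictl ps -a --name=kube-apiserver
	# Is anything answering on the apiserver port?
	minikube -p functional-389759 ssh -- curl -ks https://localhost:8443/livez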
	I1216 04:13:11.899540 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:11.910018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:11.910090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:11.938505 2088124 cri.go:89] found id: ""
	I1216 04:13:11.938532 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.938541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:11.938549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:11.938611 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:11.962625 2088124 cri.go:89] found id: ""
	I1216 04:13:11.962654 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.962663 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:11.962681 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:11.962753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:11.987471 2088124 cri.go:89] found id: ""
	I1216 04:13:11.987497 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.987506 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:11.987512 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:11.987578 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:12.016864 2088124 cri.go:89] found id: ""
	I1216 04:13:12.016892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.016900 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:12.016907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:12.016971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:12.042061 2088124 cri.go:89] found id: ""
	I1216 04:13:12.042088 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.042096 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:12.042102 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:12.042163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:12.071427 2088124 cri.go:89] found id: ""
	I1216 04:13:12.071455 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.071464 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:12.071471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:12.071533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:12.096407 2088124 cri.go:89] found id: ""
	I1216 04:13:12.096454 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.096463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:12.096470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:12.096529 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:12.120925 2088124 cri.go:89] found id: ""
	I1216 04:13:12.120952 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.120961 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:12.120970 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:12.120981 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:12.187317 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:12.178799    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.179379    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181098    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181645    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.183376    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:12.178799    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.179379    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181098    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181645    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.183376    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:12.187390 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:12.187411 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:12.212126 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:12.212162 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:12.243105 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:12.243134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:12.300571 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:12.300619 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:14.817445 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:14.827746 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:14.827821 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:14.857336 2088124 cri.go:89] found id: ""
	I1216 04:13:14.857363 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.857372 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:14.857379 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:14.857446 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:14.882109 2088124 cri.go:89] found id: ""
	I1216 04:13:14.882137 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.882146 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:14.882152 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:14.882211 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:14.914132 2088124 cri.go:89] found id: ""
	I1216 04:13:14.914161 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.914171 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:14.914178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:14.914239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:14.939185 2088124 cri.go:89] found id: ""
	I1216 04:13:14.939214 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.939223 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:14.939230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:14.939297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:14.963568 2088124 cri.go:89] found id: ""
	I1216 04:13:14.963595 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.963604 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:14.963630 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:14.963702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:14.988853 2088124 cri.go:89] found id: ""
	I1216 04:13:14.988880 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.988889 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:14.988895 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:14.988957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:15.018658 2088124 cri.go:89] found id: ""
	I1216 04:13:15.018685 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.018694 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:15.018701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:15.018780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:15.052902 2088124 cri.go:89] found id: ""
	I1216 04:13:15.052926 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.052935 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:15.052945 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:15.052956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:15.110239 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:15.110275 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:15.126429 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:15.126498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:15.193844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:15.184934    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.185663    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.187427    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.188101    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.189782    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:15.184934    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.185663    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.187427    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.188101    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.189782    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:15.193874 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:15.193889 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:15.219891 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:15.219925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
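Note: the kubelet, dmesg, containerd, and container-status gathers in each cycle are the same sources `minikube logs` bundles. A sketch of the equivalent manual collection, assuming the same profile name (the output filename is a hypothetical choice of mine):

	# Bundle the node logs from the host...
	minikube logs -p functional-389759 --file=./functional-389759.log
	# ...or inspect the units directly inside the node:
	minikube -p functional-389759 ssh -- sudo journalctl -u kubelet -n 400
	minikube -p functional-389759 ssh -- sudo journalctl -u containerd -n 400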
	I1216 04:13:17.752258 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:17.763106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:17.763180 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:17.789059 2088124 cri.go:89] found id: ""
	I1216 04:13:17.789084 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.789093 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:17.789099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:17.789158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:17.817534 2088124 cri.go:89] found id: ""
	I1216 04:13:17.817560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.817569 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:17.817576 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:17.817637 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:17.843134 2088124 cri.go:89] found id: ""
	I1216 04:13:17.843160 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.843169 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:17.843175 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:17.843240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:17.868379 2088124 cri.go:89] found id: ""
	I1216 04:13:17.868404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.868414 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:17.868421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:17.868490 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:17.893356 2088124 cri.go:89] found id: ""
	I1216 04:13:17.893384 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.893393 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:17.893400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:17.893463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:17.921808 2088124 cri.go:89] found id: ""
	I1216 04:13:17.921851 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.921860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:17.921867 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:17.921928 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:17.947257 2088124 cri.go:89] found id: ""
	I1216 04:13:17.947284 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.947293 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:17.947300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:17.947367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:17.975318 2088124 cri.go:89] found id: ""
	I1216 04:13:17.975345 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.975354 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:17.975364 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:17.975375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:18.051655 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:18.042648    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.043445    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045243    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045767    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.047547    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:18.042648    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.043445    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045243    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045767    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.047547    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:18.051680 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:18.051693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:18.078685 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:18.078723 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:18.107761 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:18.107792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:18.151402 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:13:18.166502 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:18.166585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 04:13:18.219917 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:18.220071 2088124 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
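Note: at 04:13:18 the storage-provisioner addon apply fails for the same root cause. `kubectl apply` validates the manifest against the server's OpenAPI schema, and downloading that schema requires the (unreachable) apiserver, so minikube queues the addon for retry. The `--validate=false` escape hatch named in the error only skips schema validation; it would not help here, because the apply itself still has to reach localhost:8443. For reference, the retried command as a sketch (paths verbatim from the log above):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml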
	I1216 04:13:20.720560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:20.734518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:20.734605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:20.771344 2088124 cri.go:89] found id: ""
	I1216 04:13:20.771418 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.771435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:20.771442 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:20.771517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:20.801470 2088124 cri.go:89] found id: ""
	I1216 04:13:20.801496 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.801505 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:20.801511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:20.801591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:20.826547 2088124 cri.go:89] found id: ""
	I1216 04:13:20.826620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.826644 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:20.826663 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:20.826747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:20.852855 2088124 cri.go:89] found id: ""
	I1216 04:13:20.852881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.852891 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:20.852898 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:20.852986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:20.878623 2088124 cri.go:89] found id: ""
	I1216 04:13:20.878659 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.878668 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:20.878692 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:20.878808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:20.902864 2088124 cri.go:89] found id: ""
	I1216 04:13:20.902938 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.902964 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:20.902984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:20.903181 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:20.932453 2088124 cri.go:89] found id: ""
	I1216 04:13:20.932480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.932488 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:20.932495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:20.932552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:20.961972 2088124 cri.go:89] found id: ""
	I1216 04:13:20.962003 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.962012 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:20.962021 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:20.962046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:21.031620 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:21.021920    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.023404    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.024377    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.025233    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.026911    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:21.021920    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.023404    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.024377    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.025233    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.026911    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:21.031656 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:21.031669 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:21.057107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:21.057141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:21.084165 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:21.084195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:21.144652 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:21.144688 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:23.662474 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:23.672891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:23.672972 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:23.728294 2088124 cri.go:89] found id: ""
	I1216 04:13:23.728317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.728325 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:23.728332 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:23.728390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:23.774385 2088124 cri.go:89] found id: ""
	I1216 04:13:23.774414 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.774423 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:23.774429 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:23.774496 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:23.804506 2088124 cri.go:89] found id: ""
	I1216 04:13:23.804531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.804553 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:23.804560 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:23.804618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:23.831638 2088124 cri.go:89] found id: ""
	I1216 04:13:23.831674 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.831683 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:23.831689 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:23.831766 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:23.856129 2088124 cri.go:89] found id: ""
	I1216 04:13:23.856155 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.856164 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:23.856172 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:23.856251 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:23.884761 2088124 cri.go:89] found id: ""
	I1216 04:13:23.884787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.884796 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:23.884803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:23.884905 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:23.913711 2088124 cri.go:89] found id: ""
	I1216 04:13:23.913736 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.913745 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:23.913752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:23.913810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:23.938590 2088124 cri.go:89] found id: ""
	I1216 04:13:23.938616 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.938625 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:23.938635 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:23.938646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:23.993972 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:23.994007 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:24.012474 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:24.012506 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:24.080748 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:24.071242    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.071643    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.073210    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.074640    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.075561    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:24.071242    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.071643    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.073210    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.074640    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.075561    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:24.080778 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:24.080791 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:24.110317 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:24.110357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:26.644643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:26.655360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:26.655430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:26.679082 2088124 cri.go:89] found id: ""
	I1216 04:13:26.679108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.679117 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:26.679124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:26.679184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:26.727361 2088124 cri.go:89] found id: ""
	I1216 04:13:26.727389 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.727399 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:26.727405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:26.727466 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:26.784659 2088124 cri.go:89] found id: ""
	I1216 04:13:26.784688 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.784697 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:26.784703 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:26.784765 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:26.813210 2088124 cri.go:89] found id: ""
	I1216 04:13:26.813237 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.813246 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:26.813253 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:26.813336 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:26.837930 2088124 cri.go:89] found id: ""
	I1216 04:13:26.837955 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.837963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:26.837970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:26.838031 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:26.864344 2088124 cri.go:89] found id: ""
	I1216 04:13:26.864369 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.864378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:26.864385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:26.864461 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:26.889169 2088124 cri.go:89] found id: ""
	I1216 04:13:26.889195 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.889207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:26.889214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:26.889298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:26.913569 2088124 cri.go:89] found id: ""
	I1216 04:13:26.913596 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.913604 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:26.913614 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:26.913644 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:26.929642 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:26.929671 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:26.992130 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:26.983971    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.984717    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986365    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986828    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.988289    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:26.983971    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.984717    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986365    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986828    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.988289    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:26.992154 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:26.992166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:27.018253 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:27.018291 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:27.047464 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:27.047492 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.603162 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:29.613926 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:29.614005 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:29.639664 2088124 cri.go:89] found id: ""
	I1216 04:13:29.639690 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.639700 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:29.639706 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:29.639773 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:29.664287 2088124 cri.go:89] found id: ""
	I1216 04:13:29.664313 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.664322 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:29.664328 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:29.664391 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:29.715854 2088124 cri.go:89] found id: ""
	I1216 04:13:29.715881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.715890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:29.715896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:29.715957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:29.775256 2088124 cri.go:89] found id: ""
	I1216 04:13:29.775283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.775291 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:29.775298 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:29.775359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:29.800860 2088124 cri.go:89] found id: ""
	I1216 04:13:29.800884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.800893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:29.800899 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:29.800966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:29.826179 2088124 cri.go:89] found id: ""
	I1216 04:13:29.826201 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.826209 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:29.826216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:29.826287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:29.851587 2088124 cri.go:89] found id: ""
	I1216 04:13:29.851657 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.851668 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:29.851675 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:29.851771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:29.876290 2088124 cri.go:89] found id: ""
	I1216 04:13:29.876317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.876327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:29.876336 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:29.876351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.934758 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:29.934795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:29.950904 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:29.950934 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:30.063379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:30.052801    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.053836    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.055700    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.056432    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.058410    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:30.052801    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.053836    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.055700    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.056432    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.058410    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:30.063402 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:30.063416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:30.093513 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:30.093550 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
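Note on the container-status gather seen in each cycle: the command uses a bash fallback chain so one line works across runtimes. `which crictl || echo crictl` substitutes crictl's full path when it is installed, and otherwise leaves the bare name so the first command fails and control falls through `||` to `sudo docker ps -a`, which covers Docker-runtime clusters. Isolated, the construct is:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a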
	I1216 04:13:32.623683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:32.634450 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:32.634522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:32.659386 2088124 cri.go:89] found id: ""
	I1216 04:13:32.659411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.659419 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:32.659426 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:32.659488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:32.700370 2088124 cri.go:89] found id: ""
	I1216 04:13:32.700397 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.700406 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:32.700413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:32.700483 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:32.757584 2088124 cri.go:89] found id: ""
	I1216 04:13:32.757606 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.757615 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:32.757621 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:32.757683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:32.803420 2088124 cri.go:89] found id: ""
	I1216 04:13:32.803445 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.803454 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:32.803460 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:32.803523 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:32.828842 2088124 cri.go:89] found id: ""
	I1216 04:13:32.828866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.828875 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:32.828881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:32.828949 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:32.853353 2088124 cri.go:89] found id: ""
	I1216 04:13:32.853380 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.853389 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:32.853398 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:32.853501 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:32.877408 2088124 cri.go:89] found id: ""
	I1216 04:13:32.877435 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.877444 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:32.877451 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:32.877510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:32.901743 2088124 cri.go:89] found id: ""
	I1216 04:13:32.901770 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.901780 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
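The eight "listing CRI containers" probes above are one sweep over the control-plane components minikube expects; each `crictl ps -a --quiet --name=...` returns an empty ID list, so nothing has been created yet. The same sweep can be reproduced by hand inside the node with a short loop (a sketch; it assumes crictl is on the node's PATH, which these logs confirm):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done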
	I1216 04:13:32.901790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:32.901804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:32.967369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:32.958798    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.959484    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.960984    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.961467    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.962939    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:32.958798    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.959484    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.960984    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.961467    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.962939    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:32.967394 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:32.967408 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:32.992952 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:32.992987 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:33.022501 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:33.022532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:33.078417 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:33.078454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
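Each iteration gathers the same five sources: kubelet and containerd via journalctl, the kernel ring buffer via dmesg (-P no pager, -H human-readable, -L=never no color, filtered to warn and above), kubectl describe nodes, and the container list. A hedged way to pull the same data by hand, with <profile> standing in for the profile name (which is not shown on these lines):

    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    minikube -p <profile> ssh -- sudo journalctl -u containerd -n 400
    minikube -p <profile> ssh -- sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg
    minikube -p <profile> logs    # bundles the same sources into one report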
	I1216 04:13:35.594569 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:35.607352 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:35.607423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:35.637370 2088124 cri.go:89] found id: ""
	I1216 04:13:35.637394 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.637403 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:35.637409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:35.637468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:35.661404 2088124 cri.go:89] found id: ""
	I1216 04:13:35.661428 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.661437 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:35.661443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:35.661499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:35.700087 2088124 cri.go:89] found id: ""
	I1216 04:13:35.700110 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.700118 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:35.700124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:35.700185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:35.753090 2088124 cri.go:89] found id: ""
	I1216 04:13:35.753163 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.753187 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:35.753207 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:35.753322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:35.783667 2088124 cri.go:89] found id: ""
	I1216 04:13:35.783693 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.783701 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:35.783707 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:35.783783 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:35.808401 2088124 cri.go:89] found id: ""
	I1216 04:13:35.808426 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.808434 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:35.808457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:35.808518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:35.832934 2088124 cri.go:89] found id: ""
	I1216 04:13:35.833001 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.833014 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:35.833022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:35.833080 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:35.857857 2088124 cri.go:89] found id: ""
	I1216 04:13:35.857892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.857902 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:35.857911 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:35.857928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:35.888212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:35.888240 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:35.944155 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:35.944191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:35.960968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:35.960997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:36.037726 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:36.037753 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:36.037768 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:38.565516 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:38.576078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:38.576153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:38.603519 2088124 cri.go:89] found id: ""
	I1216 04:13:38.603550 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.603564 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:38.603571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:38.603642 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:38.630185 2088124 cri.go:89] found id: ""
	I1216 04:13:38.630212 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.630222 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:38.630228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:38.630295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:38.656496 2088124 cri.go:89] found id: ""
	I1216 04:13:38.656518 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.656527 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:38.656532 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:38.656597 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:38.691354 2088124 cri.go:89] found id: ""
	I1216 04:13:38.691375 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.691384 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:38.691390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:38.691448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:38.727377 2088124 cri.go:89] found id: ""
	I1216 04:13:38.727451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.727476 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:38.727495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:38.727607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:38.792847 2088124 cri.go:89] found id: ""
	I1216 04:13:38.792924 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.792949 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:38.792969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:38.793082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:38.819253 2088124 cri.go:89] found id: ""
	I1216 04:13:38.819326 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.819351 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:38.819369 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:38.819479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:38.844536 2088124 cri.go:89] found id: ""
	I1216 04:13:38.844560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.844569 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:38.844578 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:38.844590 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:38.903226 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:38.903264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:38.919524 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:38.919556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:38.983586 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:38.983611 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:38.983625 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:39.009510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:39.009548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:39.908601 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:13:39.971867 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:39.972017 2088124 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
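The dashboard apply fails before anything is created: client-side validation needs the /openapi/v2 schema from the API server, and that download is refused like every other request to localhost:8443. The stderr suggests --validate=false, but (a hedged note) that only skips the schema download; with the API server unreachable, the apply itself would still fail. For reference, the suggested form against a single manifest would look like:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml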
	I1216 04:13:41.538728 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:41.550610 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:41.550686 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:41.580358 2088124 cri.go:89] found id: ""
	I1216 04:13:41.580388 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.580398 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:41.580405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:41.580476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:41.609251 2088124 cri.go:89] found id: ""
	I1216 04:13:41.609323 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.609346 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:41.609360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:41.609437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:41.634677 2088124 cri.go:89] found id: ""
	I1216 04:13:41.634714 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.634724 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:41.634731 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:41.634811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:41.660492 2088124 cri.go:89] found id: ""
	I1216 04:13:41.660531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.660541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:41.660555 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:41.660624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:41.706922 2088124 cri.go:89] found id: ""
	I1216 04:13:41.706958 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.706967 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:41.706974 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:41.707062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:41.771121 2088124 cri.go:89] found id: ""
	I1216 04:13:41.771150 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.771160 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:41.771167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:41.771228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:41.798371 2088124 cri.go:89] found id: ""
	I1216 04:13:41.798409 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.798418 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:41.798424 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:41.798505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:41.825080 2088124 cri.go:89] found id: ""
	I1216 04:13:41.825108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.825118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:41.825128 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:41.825142 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:41.881228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:41.881264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:41.897224 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:41.897252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:41.962985 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:41.963011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:41.963024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:41.988969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:41.989006 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:44.532418 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:44.542803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:44.542915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:44.568416 2088124 cri.go:89] found id: ""
	I1216 04:13:44.568439 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.568457 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:44.568463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:44.568522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:44.594143 2088124 cri.go:89] found id: ""
	I1216 04:13:44.594169 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.594179 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:44.594186 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:44.594247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:44.618788 2088124 cri.go:89] found id: ""
	I1216 04:13:44.618819 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.618828 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:44.618835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:44.618895 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:44.644302 2088124 cri.go:89] found id: ""
	I1216 04:13:44.644325 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.644333 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:44.644340 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:44.644398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:44.669819 2088124 cri.go:89] found id: ""
	I1216 04:13:44.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.669849 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:44.669855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:44.669924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:44.725552 2088124 cri.go:89] found id: ""
	I1216 04:13:44.725575 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.725583 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:44.725589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:44.725650 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:44.765386 2088124 cri.go:89] found id: ""
	I1216 04:13:44.765408 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.765426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:44.765432 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:44.765491 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:44.793682 2088124 cri.go:89] found id: ""
	I1216 04:13:44.793763 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.793788 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:44.793827 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:44.793857 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:44.852432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:44.852473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:44.868492 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:44.868520 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:44.931865 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:44.931889 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:44.931903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:44.957522 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:44.957557 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.485499 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:47.496279 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:47.496356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:47.520654 2088124 cri.go:89] found id: ""
	I1216 04:13:47.520681 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.520690 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:47.520696 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:47.520761 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:47.551944 2088124 cri.go:89] found id: ""
	I1216 04:13:47.551978 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.551987 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:47.552001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:47.552065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:47.578411 2088124 cri.go:89] found id: ""
	I1216 04:13:47.578438 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.578450 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:47.578457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:47.578519 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:47.604018 2088124 cri.go:89] found id: ""
	I1216 04:13:47.604041 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.604049 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:47.604055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:47.604112 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:47.629467 2088124 cri.go:89] found id: ""
	I1216 04:13:47.629491 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.629499 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:47.629506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:47.629567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:47.658252 2088124 cri.go:89] found id: ""
	I1216 04:13:47.658280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.658289 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:47.658295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:47.658362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:47.683444 2088124 cri.go:89] found id: ""
	I1216 04:13:47.683472 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.683481 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:47.683487 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:47.683548 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:47.745597 2088124 cri.go:89] found id: ""
	I1216 04:13:47.745620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.745629 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:47.745638 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:47.745650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.788108 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:47.788134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:47.844259 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:47.844292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:47.860046 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:47.860078 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:47.931100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:47.931125 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:47.931139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.458157 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:50.468844 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:50.468915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:50.493698 2088124 cri.go:89] found id: ""
	I1216 04:13:50.493725 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.493735 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:50.493741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:50.493799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:50.518623 2088124 cri.go:89] found id: ""
	I1216 04:13:50.518652 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.518664 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:50.518671 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:50.518737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:50.543940 2088124 cri.go:89] found id: ""
	I1216 04:13:50.543969 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.543978 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:50.543984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:50.544043 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:50.570246 2088124 cri.go:89] found id: ""
	I1216 04:13:50.570283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.570292 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:50.570299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:50.570374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:50.596855 2088124 cri.go:89] found id: ""
	I1216 04:13:50.596884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.596893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:50.596900 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:50.596965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:50.622325 2088124 cri.go:89] found id: ""
	I1216 04:13:50.622352 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.622361 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:50.622368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:50.622428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:50.647658 2088124 cri.go:89] found id: ""
	I1216 04:13:50.647683 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.647691 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:50.647698 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:50.647760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:50.672119 2088124 cri.go:89] found id: ""
	I1216 04:13:50.672156 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.672166 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:50.672176 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:50.672187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:50.741830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:50.741871 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:50.758886 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:50.758917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:50.843759 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:50.834975    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.835406    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837202    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837673    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.839159    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
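	The refusal above is consistent with no apiserver bound to 8443 at all, not a routing problem. A quick hand check inside the node, as a sketch (assumes iproute2's ss is present in the node image):
	    sudo ss -ltn 'sport = :8443'   # no listener printed = nothing bound on 8443, matching "connection refused"
	    sudo crictl ps -a              # matches the empty per-component listings above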
	I1216 04:13:50.843782 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:50.843795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.870242 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:50.870278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
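	The sweep above — eight empty crictl listings followed by kubelet, dmesg, describe-nodes, containerd, and container-status gathering — repeats every few seconds for as long as the apiserver stays down. A minimal hand-runnable sketch of the same sweep (inside the node, e.g. via minikube ssh; the commands and flags are copied from the log lines above):
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"   # empty output matches the found id: "" lines
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400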
	I1216 04:13:51.134849 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:51.199925 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:51.200071 2088124 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:13:51.205122 2088124 out.go:179] * Enabled addons: 
	I1216 04:13:51.208001 2088124 addons.go:530] duration metric: took 1m54.35940748s for enable addons: enabled=[]
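	Since enabled=[] and the apply failed only because localhost:8443 refused connections, the addon could in principle be retried by hand once the apiserver answers. A sketch (run inside the node; the kubectl and YAML paths are copied from the log, while curl availability in the node image is an assumption):
	    until sudo curl -sk https://localhost:8443/healthz >/dev/null; do sleep 2; done   # wait for anything to answer on 8443
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml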
	I1216 04:13:53.399835 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:53.410221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:53.410292 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:53.442995 2088124 cri.go:89] found id: ""
	I1216 04:13:53.443019 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.443028 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:53.443034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:53.443119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:53.469085 2088124 cri.go:89] found id: ""
	I1216 04:13:53.469108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.469116 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:53.469122 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:53.469185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:53.492673 2088124 cri.go:89] found id: ""
	I1216 04:13:53.492741 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.492764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:53.492778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:53.492851 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:53.519461 2088124 cri.go:89] found id: ""
	I1216 04:13:53.519484 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.519493 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:53.519499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:53.519559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:53.544555 2088124 cri.go:89] found id: ""
	I1216 04:13:53.544578 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.544587 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:53.544593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:53.544655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:53.570476 2088124 cri.go:89] found id: ""
	I1216 04:13:53.570499 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.570508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:53.570514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:53.570576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:53.598792 2088124 cri.go:89] found id: ""
	I1216 04:13:53.598814 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.598822 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:53.598828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:53.598894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:53.627454 2088124 cri.go:89] found id: ""
	I1216 04:13:53.627477 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.627485 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:53.627494 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:53.627505 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:53.684461 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:53.684541 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:53.709962 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:53.710041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:53.803419 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:53.793865    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.794942    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796672    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796978    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.798420    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:53.803444 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:53.803462 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:53.829615 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:53.829652 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
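	The pgrep probe on the next line recurs at a roughly three-second cadence (04:13:50, :53, :56, :59, ...): it is the wait loop for the apiserver process. An equivalent standalone poll, with the -x/-n/-f flags copied from the logged call:
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3   # matches the observed cadence between sweeps
	    done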
	I1216 04:13:56.358195 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:56.368722 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:56.368794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:56.393334 2088124 cri.go:89] found id: ""
	I1216 04:13:56.393358 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.393367 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:56.393373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:56.393440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:56.417912 2088124 cri.go:89] found id: ""
	I1216 04:13:56.417935 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.417944 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:56.417983 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:56.418062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:56.445420 2088124 cri.go:89] found id: ""
	I1216 04:13:56.445451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.445461 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:56.445467 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:56.445526 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:56.469454 2088124 cri.go:89] found id: ""
	I1216 04:13:56.469478 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.469487 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:56.469493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:56.469552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:56.494121 2088124 cri.go:89] found id: ""
	I1216 04:13:56.494145 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.494153 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:56.494165 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:56.494225 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:56.517578 2088124 cri.go:89] found id: ""
	I1216 04:13:56.517602 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.517611 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:56.517637 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:56.517700 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:56.544866 2088124 cri.go:89] found id: ""
	I1216 04:13:56.544891 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.544899 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:56.544941 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:56.545022 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:56.573759 2088124 cri.go:89] found id: ""
	I1216 04:13:56.573787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.573796 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:56.573805 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:56.573817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:56.599163 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:56.599202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:56.630921 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:56.630948 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:56.688477 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:56.688553 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:56.720603 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:56.720634 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:56.828200 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:56.818507    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.819366    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821131    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821683    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.823608    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:59.328466 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:59.339589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:59.339664 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:59.364346 2088124 cri.go:89] found id: ""
	I1216 04:13:59.364373 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.364382 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:59.364389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:59.364494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:59.393412 2088124 cri.go:89] found id: ""
	I1216 04:13:59.393480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.393503 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:59.393516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:59.393590 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:59.422012 2088124 cri.go:89] found id: ""
	I1216 04:13:59.422039 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.422048 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:59.422055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:59.422111 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:59.447252 2088124 cri.go:89] found id: ""
	I1216 04:13:59.447280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.447289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:59.447301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:59.447362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:59.473224 2088124 cri.go:89] found id: ""
	I1216 04:13:59.473253 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.473262 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:59.473269 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:59.473333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:59.498117 2088124 cri.go:89] found id: ""
	I1216 04:13:59.498142 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.498151 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:59.498157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:59.498218 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:59.531960 2088124 cri.go:89] found id: ""
	I1216 04:13:59.531983 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.531992 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:59.531998 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:59.532064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:59.555530 2088124 cri.go:89] found id: ""
	I1216 04:13:59.555557 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.555567 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:59.555586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:59.555597 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:59.587567 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:59.587594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:59.642770 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:59.642808 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:59.658670 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:59.658698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:59.758071 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:59.747797    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.748997    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.749964    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.751637    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.752201    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:59.758096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:59.758109 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:02.297267 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:02.308025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:02.308094 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:02.332912 2088124 cri.go:89] found id: ""
	I1216 04:14:02.332938 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.332947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:02.332953 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:02.333015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:02.358723 2088124 cri.go:89] found id: ""
	I1216 04:14:02.358746 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.358754 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:02.358760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:02.358820 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:02.384845 2088124 cri.go:89] found id: ""
	I1216 04:14:02.384869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.384878 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:02.384884 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:02.384947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:02.411300 2088124 cri.go:89] found id: ""
	I1216 04:14:02.411327 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.411337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:02.411343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:02.411401 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:02.436448 2088124 cri.go:89] found id: ""
	I1216 04:14:02.436490 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.436500 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:02.436506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:02.436568 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:02.462003 2088124 cri.go:89] found id: ""
	I1216 04:14:02.462030 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.462039 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:02.462045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:02.462115 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:02.487374 2088124 cri.go:89] found id: ""
	I1216 04:14:02.487398 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.487407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:02.487414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:02.487473 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:02.513515 2088124 cri.go:89] found id: ""
	I1216 04:14:02.513541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.513549 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:02.513559 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:02.513574 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:02.569398 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:02.569439 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:02.585943 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:02.585986 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:02.652956 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:02.644316    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.645186    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.646890    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.647275    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.648908    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:02.653021 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:02.653040 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:02.678261 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:02.678296 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:05.269784 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:05.280500 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:05.280584 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:05.305398 2088124 cri.go:89] found id: ""
	I1216 04:14:05.305424 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.305432 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:05.305439 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:05.305498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:05.331233 2088124 cri.go:89] found id: ""
	I1216 04:14:05.331256 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.331264 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:05.331270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:05.331329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:05.356501 2088124 cri.go:89] found id: ""
	I1216 04:14:05.356527 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.356537 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:05.356543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:05.356605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:05.383678 2088124 cri.go:89] found id: ""
	I1216 04:14:05.383706 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.383714 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:05.383720 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:05.383819 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:05.408800 2088124 cri.go:89] found id: ""
	I1216 04:14:05.408826 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.408835 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:05.408842 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:05.408900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:05.437636 2088124 cri.go:89] found id: ""
	I1216 04:14:05.437664 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.437673 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:05.437680 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:05.437738 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:05.463588 2088124 cri.go:89] found id: ""
	I1216 04:14:05.463619 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.463628 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:05.463635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:05.463707 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:05.492371 2088124 cri.go:89] found id: ""
	I1216 04:14:05.492399 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.492409 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:05.492418 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:05.492430 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:05.548250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:05.548287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:05.564063 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:05.564088 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:05.632904 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:05.624351    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.625146    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.626904    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.627499    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.629025    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:05.632926 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:05.632939 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:05.659343 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:05.659376 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:08.201168 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:08.211739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:08.211822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:08.236069 2088124 cri.go:89] found id: ""
	I1216 04:14:08.236097 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.236106 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:08.236118 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:08.236177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:08.261051 2088124 cri.go:89] found id: ""
	I1216 04:14:08.261075 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.261083 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:08.261089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:08.261150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:08.285569 2088124 cri.go:89] found id: ""
	I1216 04:14:08.285592 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.285600 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:08.285606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:08.285667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:08.311218 2088124 cri.go:89] found id: ""
	I1216 04:14:08.311258 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.311266 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:08.311273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:08.311366 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:08.345673 2088124 cri.go:89] found id: ""
	I1216 04:14:08.345697 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.345706 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:08.345713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:08.345776 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:08.370418 2088124 cri.go:89] found id: ""
	I1216 04:14:08.370441 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.370449 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:08.370456 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:08.370513 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:08.395107 2088124 cri.go:89] found id: ""
	I1216 04:14:08.395170 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.395196 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:08.395215 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:08.395299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:08.419032 2088124 cri.go:89] found id: ""
	I1216 04:14:08.419085 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.419094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:08.419104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:08.419115 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:08.475411 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:08.475448 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:08.491357 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:08.491391 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:08.557388 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:08.548702    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.549457    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551263    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551890    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.553470    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:08.557412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:08.557426 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:08.582743 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:08.582777 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:11.111145 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:11.123009 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:11.123095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:11.150909 2088124 cri.go:89] found id: ""
	I1216 04:14:11.150934 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.150942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:11.150949 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:11.151075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:11.182574 2088124 cri.go:89] found id: ""
	I1216 04:14:11.182600 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.182610 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:11.182616 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:11.182719 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:11.208283 2088124 cri.go:89] found id: ""
	I1216 04:14:11.208310 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.208319 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:11.208325 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:11.208417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:11.237024 2088124 cri.go:89] found id: ""
	I1216 04:14:11.237052 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.237061 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:11.237069 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:11.237132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:11.265167 2088124 cri.go:89] found id: ""
	I1216 04:14:11.265189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.265197 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:11.265203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:11.265261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:11.290122 2088124 cri.go:89] found id: ""
	I1216 04:14:11.290144 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.290152 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:11.290159 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:11.290217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:11.317188 2088124 cri.go:89] found id: ""
	I1216 04:14:11.317211 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.317219 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:11.317225 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:11.317304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:11.342140 2088124 cri.go:89] found id: ""
	I1216 04:14:11.342164 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.342173 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:11.342206 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:11.342225 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:11.368021 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:11.368058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:11.397287 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:11.397318 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:11.453124 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:11.453158 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:11.468881 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:11.468910 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:11.535360 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:11.526322    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.526895    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.528582    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.529258    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.530769    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:14.036278 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:14.046954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:14.047104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:14.072898 2088124 cri.go:89] found id: ""
	I1216 04:14:14.072923 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.072932 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:14.072938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:14.072998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:14.098005 2088124 cri.go:89] found id: ""
	I1216 04:14:14.098041 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.098049 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:14.098056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:14.098123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:14.125919 2088124 cri.go:89] found id: ""
	I1216 04:14:14.125945 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.125954 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:14.125961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:14.126068 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:14.151392 2088124 cri.go:89] found id: ""
	I1216 04:14:14.151416 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.151424 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:14.151430 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:14.151494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:14.181023 2088124 cri.go:89] found id: ""
	I1216 04:14:14.181054 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.181064 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:14.181070 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:14.181139 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:14.206141 2088124 cri.go:89] found id: ""
	I1216 04:14:14.206166 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.206175 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:14.206181 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:14.206250 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:14.230051 2088124 cri.go:89] found id: ""
	I1216 04:14:14.230084 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.230093 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:14.230098 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:14.230183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:14.255362 2088124 cri.go:89] found id: ""
	I1216 04:14:14.255388 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.255412 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:14.255423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:14.255434 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:14.310536 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:14.310573 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:14.326390 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:14.326478 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:14.389470 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:14.380411    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.381325    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383077    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383571    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.385261    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:14.389493 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:14.389512 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:14.415767 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:14.415804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
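	Each polling cycle repeats the same per-component sweep: one `crictl ps -a --quiet --name=<component>` call per control-plane piece, each returning an empty ID list here. Condensed into a loop, the sweep above is equivalent to the following sketch (same crictl invocation as in the log, just looped rather than unrolled):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # Empty output means no container (running or exited) matches the name.
	  [ -n "$ids" ] || echo "No container was found matching \"$name\""
	done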
	I1216 04:14:16.946959 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:16.978797 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:16.978873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:17.023928 2088124 cri.go:89] found id: ""
	I1216 04:14:17.024005 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.024022 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:17.024030 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:17.024092 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:17.049994 2088124 cri.go:89] found id: ""
	I1216 04:14:17.050024 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.050033 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:17.050040 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:17.050122 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:17.075095 2088124 cri.go:89] found id: ""
	I1216 04:14:17.075120 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.075128 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:17.075134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:17.075195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:17.103161 2088124 cri.go:89] found id: ""
	I1216 04:14:17.103189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.103209 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:17.103216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:17.103687 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:17.139217 2088124 cri.go:89] found id: ""
	I1216 04:14:17.139246 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.139255 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:17.139261 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:17.139325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:17.170063 2088124 cri.go:89] found id: ""
	I1216 04:14:17.170091 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.170102 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:17.170108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:17.170186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:17.195843 2088124 cri.go:89] found id: ""
	I1216 04:14:17.195869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.195879 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:17.195885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:17.195966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:17.221935 2088124 cri.go:89] found id: ""
	I1216 04:14:17.221962 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.221971 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:17.222001 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:17.222019 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:17.278612 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:17.278650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:17.295004 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:17.295076 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:17.359742 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:17.351174    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.351977    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.353668    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.354101    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.355803    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:17.359766 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:17.359779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:17.385281 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:17.385316 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:19.913504 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:19.924126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:19.924223 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:19.981102 2088124 cri.go:89] found id: ""
	I1216 04:14:19.981182 2088124 logs.go:282] 0 containers: []
	W1216 04:14:19.981204 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:19.981223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:19.981319 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:20.025801 2088124 cri.go:89] found id: ""
	I1216 04:14:20.025875 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.025897 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:20.025918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:20.026010 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:20.057062 2088124 cri.go:89] found id: ""
	I1216 04:14:20.057088 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.057097 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:20.057103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:20.057168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:20.082749 2088124 cri.go:89] found id: ""
	I1216 04:14:20.082774 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.082783 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:20.082790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:20.082854 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:20.109626 2088124 cri.go:89] found id: ""
	I1216 04:14:20.109653 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.109663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:20.109670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:20.109731 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:20.134934 2088124 cri.go:89] found id: ""
	I1216 04:14:20.134957 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.134980 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:20.134988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:20.135088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:20.161170 2088124 cri.go:89] found id: ""
	I1216 04:14:20.161197 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.161206 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:20.161213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:20.161299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:20.187553 2088124 cri.go:89] found id: ""
	I1216 04:14:20.187578 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.187587 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:20.187597 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:20.187629 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:20.255987 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:20.246960    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.247520    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.249493    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.250032    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.251482    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:20.256011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:20.256024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:20.281257 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:20.281331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:20.310693 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:20.310724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:20.367395 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:20.367436 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:22.883831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:22.894924 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:22.894999 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:22.920332 2088124 cri.go:89] found id: ""
	I1216 04:14:22.920359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.920379 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:22.920386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:22.920445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:22.977215 2088124 cri.go:89] found id: ""
	I1216 04:14:22.977243 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.977252 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:22.977258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:22.977317 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:23.028698 2088124 cri.go:89] found id: ""
	I1216 04:14:23.028723 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.028732 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:23.028739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:23.028804 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:23.055098 2088124 cri.go:89] found id: ""
	I1216 04:14:23.055124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.055133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:23.055140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:23.055209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:23.080450 2088124 cri.go:89] found id: ""
	I1216 04:14:23.080483 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.080493 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:23.080499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:23.080559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:23.105251 2088124 cri.go:89] found id: ""
	I1216 04:14:23.105275 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.105284 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:23.105296 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:23.105355 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:23.130544 2088124 cri.go:89] found id: ""
	I1216 04:14:23.130573 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.130588 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:23.130594 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:23.130653 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:23.155787 2088124 cri.go:89] found id: ""
	I1216 04:14:23.155863 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.155879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:23.155889 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:23.155901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:23.184285 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:23.184315 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:23.240021 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:23.240058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:23.255934 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:23.255969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:23.324390 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:23.316382    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.316885    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.318697    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.319197    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.320361    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:23.324415 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:23.324432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:25.850349 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:25.861084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:25.861157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:25.885912 2088124 cri.go:89] found id: ""
	I1216 04:14:25.885939 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.885947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:25.885954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:25.886015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:25.914385 2088124 cri.go:89] found id: ""
	I1216 04:14:25.914408 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.914416 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:25.914422 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:25.914482 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:25.957379 2088124 cri.go:89] found id: ""
	I1216 04:14:25.957406 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.957415 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:25.957421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:25.957480 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:26.020008 2088124 cri.go:89] found id: ""
	I1216 04:14:26.020036 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.020045 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:26.020051 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:26.020118 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:26.047424 2088124 cri.go:89] found id: ""
	I1216 04:14:26.047452 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.047461 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:26.047468 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:26.047534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:26.073161 2088124 cri.go:89] found id: ""
	I1216 04:14:26.073187 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.073208 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:26.073216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:26.073277 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:26.103238 2088124 cri.go:89] found id: ""
	I1216 04:14:26.103260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.103268 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:26.103274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:26.103337 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:26.128964 2088124 cri.go:89] found id: ""
	I1216 04:14:26.128993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.129004 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:26.129013 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:26.129025 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:26.185309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:26.185350 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:26.201116 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:26.201191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:26.261346 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:26.253589    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.254367    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255430    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255945    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.257599    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:26.261367 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:26.261379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:26.286659 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:26.286693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:28.816260 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:28.826799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:28.826873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:28.851396 2088124 cri.go:89] found id: ""
	I1216 04:14:28.851425 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.851435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:28.851441 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:28.851503 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:28.875518 2088124 cri.go:89] found id: ""
	I1216 04:14:28.875541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.875550 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:28.875556 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:28.875614 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:28.904430 2088124 cri.go:89] found id: ""
	I1216 04:14:28.904454 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.904462 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:28.904476 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:28.904537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:28.929129 2088124 cri.go:89] found id: ""
	I1216 04:14:28.929153 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.929162 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:28.929169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:28.929228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:28.966014 2088124 cri.go:89] found id: ""
	I1216 04:14:28.966042 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.966051 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:28.966057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:28.966123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:29.025945 2088124 cri.go:89] found id: ""
	I1216 04:14:29.025972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.025988 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:29.025995 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:29.026064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:29.051899 2088124 cri.go:89] found id: ""
	I1216 04:14:29.051935 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.051946 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:29.051952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:29.052023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:29.080317 2088124 cri.go:89] found id: ""
	I1216 04:14:29.080341 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.080351 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:29.080361 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:29.080373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:29.135930 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:29.135967 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:29.154187 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:29.154216 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:29.221073 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:29.212783    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.213403    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.214843    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.215170    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.216622    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:29.221096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:29.221111 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:29.246641 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:29.246676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:31.779202 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:31.790954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:31.791029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:31.817812 2088124 cri.go:89] found id: ""
	I1216 04:14:31.817897 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.817925 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:31.817946 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:31.818067 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:31.842726 2088124 cri.go:89] found id: ""
	I1216 04:14:31.842753 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.842762 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:31.842769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:31.842832 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:31.868497 2088124 cri.go:89] found id: ""
	I1216 04:14:31.868523 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.868532 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:31.868538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:31.868602 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:31.898624 2088124 cri.go:89] found id: ""
	I1216 04:14:31.898646 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.898655 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:31.898662 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:31.898720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:31.924967 2088124 cri.go:89] found id: ""
	I1216 04:14:31.924993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.925003 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:31.925011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:31.925074 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:31.966946 2088124 cri.go:89] found id: ""
	I1216 04:14:31.966972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.966981 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:31.966988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:31.967075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:31.999136 2088124 cri.go:89] found id: ""
	I1216 04:14:31.999162 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.999170 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:31.999177 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:31.999248 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:32.037224 2088124 cri.go:89] found id: ""
	I1216 04:14:32.037260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:32.037269 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:32.037280 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:32.037292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:32.098221 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:32.098257 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:32.114315 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:32.114346 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:32.179522 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:32.170571    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.171240    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.172913    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.173537    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.175422    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:32.179546 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:32.179598 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:32.205901 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:32.205937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:34.736487 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:34.747033 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:34.747125 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:34.774783 2088124 cri.go:89] found id: ""
	I1216 04:14:34.774808 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.774817 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:34.774826 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:34.774892 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:34.804248 2088124 cri.go:89] found id: ""
	I1216 04:14:34.804272 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.804281 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:34.804294 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:34.804356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:34.829461 2088124 cri.go:89] found id: ""
	I1216 04:14:34.829485 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.829493 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:34.829499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:34.829560 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:34.857116 2088124 cri.go:89] found id: ""
	I1216 04:14:34.857141 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.857151 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:34.857157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:34.857219 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:34.882336 2088124 cri.go:89] found id: ""
	I1216 04:14:34.882359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.882367 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:34.882373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:34.882434 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:34.907931 2088124 cri.go:89] found id: ""
	I1216 04:14:34.907954 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.907962 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:34.907969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:34.908027 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:34.956047 2088124 cri.go:89] found id: ""
	I1216 04:14:34.956069 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.956077 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:34.956084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:34.956145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:35.024159 2088124 cri.go:89] found id: ""
	I1216 04:14:35.024183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:35.024197 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:35.024207 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:35.024218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:35.052560 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:35.052632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:35.120169 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:35.109992    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.110996    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.112972    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.113360    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.115151    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:35.109992    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.110996    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.112972    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.113360    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.115151    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:35.120193 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:35.120206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:35.148539 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:35.148572 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:35.177137 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:35.177163 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
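	The cycle above is minikube's control-plane probe: for each expected component it lists matching CRI containers and finds none, which is what triggers the log-gathering fallback that follows. A minimal standalone sketch of that check, assuming only the component names and the crictl invocation shown in the log (the loop itself and the message wording are illustrative):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      # Same invocation as the log: all states, IDs only, filtered by name.
	      ids="$(sudo crictl ps -a --quiet --name="$name")"
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done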
	I1216 04:14:37.736828 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:37.748034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:37.748119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:37.774072 2088124 cri.go:89] found id: ""
	I1216 04:14:37.774096 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.774105 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:37.774113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:37.774174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:37.798854 2088124 cri.go:89] found id: ""
	I1216 04:14:37.798879 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.798887 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:37.798893 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:37.798953 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:37.824863 2088124 cri.go:89] found id: ""
	I1216 04:14:37.824889 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.824898 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:37.824905 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:37.824995 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:37.849318 2088124 cri.go:89] found id: ""
	I1216 04:14:37.849340 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.849348 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:37.849354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:37.849418 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:37.874246 2088124 cri.go:89] found id: ""
	I1216 04:14:37.874269 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.874277 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:37.874285 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:37.874343 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:37.900978 2088124 cri.go:89] found id: ""
	I1216 04:14:37.901002 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.901010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:37.901016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:37.901076 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:37.929331 2088124 cri.go:89] found id: ""
	I1216 04:14:37.929360 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.929370 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:37.929376 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:37.929440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:37.969527 2088124 cri.go:89] found id: ""
	I1216 04:14:37.969556 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.969564 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:37.969573 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:37.969585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:38.009528 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:38.009566 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:38.055850 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:38.055880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:38.113260 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:38.113301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:38.129810 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:38.129846 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:38.195392 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:38.187231    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.187971    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.189569    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.190023    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.191590    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:38.187231    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.187971    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.189569    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.190023    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.191590    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
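	Each "describe nodes" attempt fails the same way: the kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running nothing is listening there, so every dial is refused. A quick standalone probe for that condition, reusing the binary and kubeconfig paths from the log (the /readyz endpoint and the fallback message are illustrative additions, not part of the harness):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz \
	      || echo "apiserver unreachable on localhost:8443"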
	I1216 04:14:40.695695 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:40.706489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:40.706566 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:40.733370 2088124 cri.go:89] found id: ""
	I1216 04:14:40.733400 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.733409 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:40.733416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:40.733476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:40.760997 2088124 cri.go:89] found id: ""
	I1216 04:14:40.761027 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.761037 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:40.761043 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:40.761106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:40.785757 2088124 cri.go:89] found id: ""
	I1216 04:14:40.785785 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.785793 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:40.785799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:40.785859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:40.810917 2088124 cri.go:89] found id: ""
	I1216 04:14:40.810946 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.810954 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:40.810961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:40.811021 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:40.837261 2088124 cri.go:89] found id: ""
	I1216 04:14:40.837289 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.837298 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:40.837306 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:40.837367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:40.865095 2088124 cri.go:89] found id: ""
	I1216 04:14:40.865124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.865133 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:40.865139 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:40.865197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:40.893132 2088124 cri.go:89] found id: ""
	I1216 04:14:40.893156 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.893164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:40.893170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:40.893230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:40.917368 2088124 cri.go:89] found id: ""
	I1216 04:14:40.917390 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.917398 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:40.917407 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:40.917418 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:40.988706 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:40.988789 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:41.026114 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:41.026141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:41.097192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:41.088410    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.089258    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.090961    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.091663    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.093296    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:41.088410    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.089258    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.090961    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.091663    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.093296    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:41.097218 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:41.097232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:41.122894 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:41.122929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
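	When no components are found, minikube falls back to collecting host-side evidence: the kubelet and containerd journald units, filtered dmesg output, and a container listing with a crictl-or-docker fallback. The same collection can be run by hand; the commands are copied from the log and only the output file names are illustrative:

	    sudo journalctl -u kubelet -n 400     > kubelet.log
	    sudo journalctl -u containerd -n 400  > containerd.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    # Fall back to docker if crictl is absent, as the log's backtick idiom does.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a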
	I1216 04:14:43.655609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:43.666076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:43.666148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:43.697517 2088124 cri.go:89] found id: ""
	I1216 04:14:43.697542 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.697550 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:43.697557 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:43.697617 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:43.722700 2088124 cri.go:89] found id: ""
	I1216 04:14:43.722727 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.722737 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:43.722743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:43.722811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:43.751469 2088124 cri.go:89] found id: ""
	I1216 04:14:43.751496 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.751509 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:43.751516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:43.751577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:43.776779 2088124 cri.go:89] found id: ""
	I1216 04:14:43.776804 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.776812 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:43.776818 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:43.776876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:43.801004 2088124 cri.go:89] found id: ""
	I1216 04:14:43.801028 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.801037 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:43.801044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:43.801131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:43.825723 2088124 cri.go:89] found id: ""
	I1216 04:14:43.825747 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.825756 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:43.825763 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:43.825823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:43.854440 2088124 cri.go:89] found id: ""
	I1216 04:14:43.854464 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.854473 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:43.854479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:43.854537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:43.881228 2088124 cri.go:89] found id: ""
	I1216 04:14:43.881251 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.881261 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:43.881270 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:43.881282 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:43.908258 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:43.908330 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:43.975235 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:43.975273 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:44.032765 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:44.032798 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:44.097769 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:44.088888    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.089678    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.091397    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.092058    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.093573    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:44.088888    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.089678    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.091397    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.092058    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.093573    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:44.097791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:44.097814 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
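	Every retry is gated by the pgrep probe at the top of the cycle: -f matches the pattern against the full command line, -x requires the match to be exact, and -n keeps only the newest hit. A non-zero exit means no apiserver process exists yet, so the loop repeats. The same gate, runnable standalone (only the echo is illustrative):

	    if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; then
	      echo "kube-apiserver is not running yet"
	    fi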
	I1216 04:14:46.624214 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:46.634860 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:46.634939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:46.662490 2088124 cri.go:89] found id: ""
	I1216 04:14:46.662518 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.662528 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:46.662534 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:46.662598 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:46.687532 2088124 cri.go:89] found id: ""
	I1216 04:14:46.687558 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.687567 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:46.687574 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:46.687639 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:46.711951 2088124 cri.go:89] found id: ""
	I1216 04:14:46.711978 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.711988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:46.711994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:46.712054 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:46.742207 2088124 cri.go:89] found id: ""
	I1216 04:14:46.742241 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.742250 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:46.742257 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:46.742331 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:46.766943 2088124 cri.go:89] found id: ""
	I1216 04:14:46.766972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.766981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:46.766988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:46.767070 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:46.792400 2088124 cri.go:89] found id: ""
	I1216 04:14:46.792432 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.792442 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:46.792455 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:46.792533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:46.817511 2088124 cri.go:89] found id: ""
	I1216 04:14:46.817533 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.817542 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:46.817548 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:46.817610 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:46.845432 2088124 cri.go:89] found id: ""
	I1216 04:14:46.845455 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.845464 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:46.845473 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:46.845484 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:46.901017 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:46.901050 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:46.916980 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:46.917012 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:47.034196 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:47.019727    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.020515    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022325    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022651    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.026596    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:47.019727    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.020515    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022325    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022651    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.026596    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:47.034216 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:47.034230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:47.060131 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:47.060167 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:49.592378 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:49.603274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:49.603390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:49.628592 2088124 cri.go:89] found id: ""
	I1216 04:14:49.628617 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.628626 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:49.628632 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:49.628693 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:49.654951 2088124 cri.go:89] found id: ""
	I1216 04:14:49.654974 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.654983 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:49.654990 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:49.655079 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:49.680966 2088124 cri.go:89] found id: ""
	I1216 04:14:49.680992 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.681004 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:49.681011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:49.681077 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:49.705520 2088124 cri.go:89] found id: ""
	I1216 04:14:49.705549 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.705558 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:49.705565 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:49.705624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:49.735615 2088124 cri.go:89] found id: ""
	I1216 04:14:49.735643 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.735653 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:49.735660 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:49.735723 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:49.761693 2088124 cri.go:89] found id: ""
	I1216 04:14:49.761721 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.761730 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:49.761736 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:49.761799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:49.786810 2088124 cri.go:89] found id: ""
	I1216 04:14:49.786852 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.786866 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:49.786875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:49.786943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:49.815183 2088124 cri.go:89] found id: ""
	I1216 04:14:49.815209 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.815218 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:49.815236 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:49.815247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:49.870316 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:49.870351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:49.886698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:49.886724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:50.017086 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:49.989874    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.000829    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.004526    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.006532    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.011272    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:49.989874    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.000829    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.004526    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.006532    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.011272    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:50.017115 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:50.017137 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:50.046781 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:50.046822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:52.580326 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:52.591108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:52.591184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:52.619853 2088124 cri.go:89] found id: ""
	I1216 04:14:52.619876 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.619884 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:52.619891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:52.619973 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:52.644168 2088124 cri.go:89] found id: ""
	I1216 04:14:52.644191 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.644199 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:52.644205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:52.644266 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:52.669818 2088124 cri.go:89] found id: ""
	I1216 04:14:52.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.669850 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:52.669856 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:52.669916 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:52.695228 2088124 cri.go:89] found id: ""
	I1216 04:14:52.695252 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.695260 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:52.695267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:52.695329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:52.720235 2088124 cri.go:89] found id: ""
	I1216 04:14:52.720260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.720269 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:52.720275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:52.720339 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:52.749551 2088124 cri.go:89] found id: ""
	I1216 04:14:52.749574 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.749582 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:52.749589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:52.749651 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:52.776351 2088124 cri.go:89] found id: ""
	I1216 04:14:52.776375 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.776383 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:52.776389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:52.776450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:52.805147 2088124 cri.go:89] found id: ""
	I1216 04:14:52.805175 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.805185 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:52.805195 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:52.805211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:52.831059 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:52.831098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:52.861113 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:52.861143 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:52.916847 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:52.916883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:52.933489 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:52.933517 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:53.043697 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:53.034203    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.035135    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.036921    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.037478    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.039232    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:53.034203    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.035135    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.036921    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.037478    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.039232    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:55.544026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:55.554861 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:55.554956 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:55.578474 2088124 cri.go:89] found id: ""
	I1216 04:14:55.578502 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.578511 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:55.578518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:55.578633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:55.602756 2088124 cri.go:89] found id: ""
	I1216 04:14:55.602795 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.602804 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:55.602811 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:55.602900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:55.633011 2088124 cri.go:89] found id: ""
	I1216 04:14:55.633035 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.633043 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:55.633049 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:55.633136 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:55.658213 2088124 cri.go:89] found id: ""
	I1216 04:14:55.658247 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.658257 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:55.658280 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:55.658411 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:55.683154 2088124 cri.go:89] found id: ""
	I1216 04:14:55.683183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.683201 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:55.683208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:55.683280 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:55.707894 2088124 cri.go:89] found id: ""
	I1216 04:14:55.707968 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.707991 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:55.708010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:55.708099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:55.732419 2088124 cri.go:89] found id: ""
	I1216 04:14:55.732506 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.732531 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:55.732543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:55.732624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:55.760911 2088124 cri.go:89] found id: ""
	I1216 04:14:55.760981 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.761007 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:55.761023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:55.761038 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:55.817437 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:55.817473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:55.833374 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:55.833405 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:55.898151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:55.890310    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.890838    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892319    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892862    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.894354    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:55.890310    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.890838    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892319    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892862    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.894354    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:55.898175 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:55.898195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:55.923776 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:55.923810 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
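	The probe timestamps (04:14:34, :37, :40, ... :58) show the loop retrying on a roughly three-second cadence, consistent with a wait-for-apiserver poll rather than a crash loop. If this report is saved to a file, the cadence can be pulled out directly (the file name is illustrative):

	    # Print one timestamp per apiserver probe.
	    grep 'pgrep -xnf kube-apiserver' minikube.log | awk '{print $2}'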
	I1216 04:14:58.462512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:58.474113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:58.474190 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:58.500558 2088124 cri.go:89] found id: ""
	I1216 04:14:58.500581 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.500590 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:58.500597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:58.500659 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:58.525784 2088124 cri.go:89] found id: ""
	I1216 04:14:58.525809 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.525818 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:58.525824 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:58.525883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:58.550534 2088124 cri.go:89] found id: ""
	I1216 04:14:58.550560 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.550570 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:58.550577 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:58.550634 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:58.577140 2088124 cri.go:89] found id: ""
	I1216 04:14:58.577167 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.577177 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:58.577184 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:58.577244 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:58.605864 2088124 cri.go:89] found id: ""
	I1216 04:14:58.605890 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.605904 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:58.605911 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:58.605975 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:58.634121 2088124 cri.go:89] found id: ""
	I1216 04:14:58.634152 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.634161 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:58.634168 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:58.634239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:58.660170 2088124 cri.go:89] found id: ""
	I1216 04:14:58.660198 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.660207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:58.660213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:58.660273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:58.685306 2088124 cri.go:89] found id: ""
	I1216 04:14:58.685333 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.685342 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
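	The block above is one full enumeration pass: minikube asks crictl for each expected control-plane container by name, and every query comes back empty. A minimal bash sketch of that pass, reusing the exact crictl invocation from the log (the name list is copied from the queries above):

	#!/bin/bash
	# One enumeration pass over the expected control-plane containers.
	# An empty result (logged as: found id: "") means the container never started.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done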
	I1216 04:14:58.685351 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:58.685364 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:58.741326 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:58.741362 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:58.757562 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:58.757594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:58.823813 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:58.815321    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.815799    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817390    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817823    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.819361    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:58.823838 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:58.823854 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:58.849684 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:58.849722 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.379834 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:01.391065 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:01.391142 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:01.417501 2088124 cri.go:89] found id: ""
	I1216 04:15:01.417579 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.417602 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:01.417643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:01.417737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:01.448334 2088124 cri.go:89] found id: ""
	I1216 04:15:01.448360 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.448368 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:01.448375 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:01.448447 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:01.476980 2088124 cri.go:89] found id: ""
	I1216 04:15:01.477006 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.477015 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:01.477022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:01.477108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:01.501087 2088124 cri.go:89] found id: ""
	I1216 04:15:01.501110 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.501118 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:01.501125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:01.501183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:01.526116 2088124 cri.go:89] found id: ""
	I1216 04:15:01.526139 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.526147 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:01.526154 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:01.526217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:01.552211 2088124 cri.go:89] found id: ""
	I1216 04:15:01.552234 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.552249 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:01.552255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:01.552314 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:01.579190 2088124 cri.go:89] found id: ""
	I1216 04:15:01.579220 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.579229 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:01.579243 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:01.579362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:01.606084 2088124 cri.go:89] found id: ""
	I1216 04:15:01.606108 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.606118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:01.606127 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:01.606139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.638251 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:01.638281 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:01.698103 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:01.698145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:01.714771 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:01.714858 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:01.780079 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:01.771727    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.772399    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774006    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774479    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.776016    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
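	Every kubectl attempt fails identically: nothing is listening on [::1]:8443. One way to confirm that independently of kubectl, assuming curl is available on the node (this probe is an illustration, not something the test runs):

	# "connection refused" here is consistent with the empty crictl listings:
	# kube-apiserver was never started, so no process owns port 8443.
	curl -sk --max-time 5 https://localhost:8443/healthz \
	  || echo "apiserver not listening on 8443"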
	I1216 04:15:01.780150 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:01.780177 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:04.307354 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:04.318980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:04.319082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:04.348465 2088124 cri.go:89] found id: ""
	I1216 04:15:04.348496 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.348506 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:04.348513 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:04.348593 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:04.374442 2088124 cri.go:89] found id: ""
	I1216 04:15:04.374467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.374476 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:04.374485 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:04.374543 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:04.401352 2088124 cri.go:89] found id: ""
	I1216 04:15:04.401376 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.401384 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:04.401390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:04.401448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:04.427946 2088124 cri.go:89] found id: ""
	I1216 04:15:04.427969 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.427978 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:04.427984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:04.428044 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:04.453439 2088124 cri.go:89] found id: ""
	I1216 04:15:04.453474 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.453483 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:04.453490 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:04.453549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:04.478368 2088124 cri.go:89] found id: ""
	I1216 04:15:04.478395 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.478403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:04.478409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:04.478467 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:04.502274 2088124 cri.go:89] found id: ""
	I1216 04:15:04.502303 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.502312 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:04.502318 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:04.502379 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:04.526440 2088124 cri.go:89] found id: ""
	I1216 04:15:04.526467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.526475 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:04.526484 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:04.526494 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:04.581559 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:04.581596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:04.597786 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:04.597815 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:04.661194 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:04.653077    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.653620    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655229    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655719    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.657352    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:04.661217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:04.661230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:04.686508 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:04.686544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.214226 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:07.226828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:07.226904 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:07.267774 2088124 cri.go:89] found id: ""
	I1216 04:15:07.267805 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.267814 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:07.267820 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:07.267880 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:07.293953 2088124 cri.go:89] found id: ""
	I1216 04:15:07.293980 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.293988 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:07.293994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:07.294052 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:07.317542 2088124 cri.go:89] found id: ""
	I1216 04:15:07.317568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.317577 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:07.317583 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:07.317695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:07.351422 2088124 cri.go:89] found id: ""
	I1216 04:15:07.351449 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.351458 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:07.351465 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:07.351552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:07.376043 2088124 cri.go:89] found id: ""
	I1216 04:15:07.376069 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.376092 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:07.376121 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:07.376204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:07.400719 2088124 cri.go:89] found id: ""
	I1216 04:15:07.400749 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.400758 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:07.400765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:07.400849 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:07.425726 2088124 cri.go:89] found id: ""
	I1216 04:15:07.425754 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.425763 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:07.425769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:07.425833 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:07.450385 2088124 cri.go:89] found id: ""
	I1216 04:15:07.450413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.450422 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:07.450431 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:07.450444 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.482416 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:07.482446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:07.543525 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:07.543569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:07.559963 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:07.559991 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:07.626193 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:07.617713    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.618478    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620010    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620349    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.621831    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:07.626217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:07.626233 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.151663 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:10.162850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:10.162922 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:10.218459 2088124 cri.go:89] found id: ""
	I1216 04:15:10.218492 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.218502 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:10.218508 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:10.218581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:10.266687 2088124 cri.go:89] found id: ""
	I1216 04:15:10.266716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.266726 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:10.266732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:10.266794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:10.297579 2088124 cri.go:89] found id: ""
	I1216 04:15:10.297607 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.297616 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:10.297623 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:10.297682 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:10.327612 2088124 cri.go:89] found id: ""
	I1216 04:15:10.327637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.327646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:10.327652 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:10.327710 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:10.352049 2088124 cri.go:89] found id: ""
	I1216 04:15:10.352073 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.352082 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:10.352088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:10.352150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:10.380981 2088124 cri.go:89] found id: ""
	I1216 04:15:10.381005 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.381013 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:10.381020 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:10.381083 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:10.405173 2088124 cri.go:89] found id: ""
	I1216 04:15:10.405198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.405207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:10.405213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:10.405271 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:10.430194 2088124 cri.go:89] found id: ""
	I1216 04:15:10.430219 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.430248 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:10.430259 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:10.430272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:10.486344 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:10.486381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:10.502248 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:10.502278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:10.568856 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:10.561184    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.561736    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563232    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563538    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.565000    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:10.568879 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:10.568893 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.595314 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:10.595349 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:13.125478 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:13.136862 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:13.136937 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:13.162398 2088124 cri.go:89] found id: ""
	I1216 04:15:13.162432 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.162442 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:13.162449 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:13.162512 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:13.213417 2088124 cri.go:89] found id: ""
	I1216 04:15:13.213443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.213451 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:13.213457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:13.213515 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:13.265047 2088124 cri.go:89] found id: ""
	I1216 04:15:13.265074 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.265082 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:13.265089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:13.265146 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:13.295404 2088124 cri.go:89] found id: ""
	I1216 04:15:13.295431 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.295442 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:13.295448 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:13.295510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:13.320244 2088124 cri.go:89] found id: ""
	I1216 04:15:13.320272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.320281 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:13.320288 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:13.320347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:13.343989 2088124 cri.go:89] found id: ""
	I1216 04:15:13.344013 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.344022 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:13.344028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:13.344088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:13.367813 2088124 cri.go:89] found id: ""
	I1216 04:15:13.367838 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.367847 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:13.367854 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:13.367914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:13.391747 2088124 cri.go:89] found id: ""
	I1216 04:15:13.391772 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.391782 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:13.391791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:13.391802 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:13.416337 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:13.416373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:13.443257 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:13.443286 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:13.501977 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:13.502016 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:13.517698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:13.517730 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:13.580974 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:13.572384    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.573100    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.574739    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.575073    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.576736    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
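	The pgrep line that opens each cycle below is the readiness gate: the whole sequence repeats, roughly every three seconds, until the pattern matches a running kube-apiserver. A hypothetical reconstruction of that wait, using the same pgrep invocation the log shows:

	# -f matches the full command line, -x requires the whole line to match,
	# -n returns only the newest match. Succeeds once kube-apiserver runs.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done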
	I1216 04:15:16.081274 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:16.092248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:16.092325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:16.118104 2088124 cri.go:89] found id: ""
	I1216 04:15:16.118128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.118138 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:16.118145 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:16.118207 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:16.148494 2088124 cri.go:89] found id: ""
	I1216 04:15:16.148519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.148529 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:16.148535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:16.148600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:16.177106 2088124 cri.go:89] found id: ""
	I1216 04:15:16.177133 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.177142 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:16.177148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:16.177209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:16.225478 2088124 cri.go:89] found id: ""
	I1216 04:15:16.225512 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.225521 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:16.225528 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:16.225601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:16.263615 2088124 cri.go:89] found id: ""
	I1216 04:15:16.263642 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.263651 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:16.263657 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:16.263717 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:16.288816 2088124 cri.go:89] found id: ""
	I1216 04:15:16.288840 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.288849 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:16.288855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:16.288915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:16.313866 2088124 cri.go:89] found id: ""
	I1216 04:15:16.313899 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.313909 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:16.313915 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:16.313986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:16.338822 2088124 cri.go:89] found id: ""
	I1216 04:15:16.338847 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.338865 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:16.338874 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:16.338886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:16.397500 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:16.397535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:16.413373 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:16.413401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:16.481369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:16.473361    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.474033    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475539    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475954    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.477408    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:16.481391 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:16.481404 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:16.506768 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:16.506801 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:19.036905 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:19.047523 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:19.047594 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:19.071924 2088124 cri.go:89] found id: ""
	I1216 04:15:19.071947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.071956 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:19.071963 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:19.072020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:19.096694 2088124 cri.go:89] found id: ""
	I1216 04:15:19.096716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.096736 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:19.096742 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:19.096808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:19.122106 2088124 cri.go:89] found id: ""
	I1216 04:15:19.122129 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.122137 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:19.122144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:19.122204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:19.151300 2088124 cri.go:89] found id: ""
	I1216 04:15:19.151327 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.151337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:19.151346 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:19.151407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:19.176879 2088124 cri.go:89] found id: ""
	I1216 04:15:19.176906 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.176915 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:19.176921 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:19.176982 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:19.248606 2088124 cri.go:89] found id: ""
	I1216 04:15:19.248637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.248646 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:19.248654 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:19.248720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:19.284067 2088124 cri.go:89] found id: ""
	I1216 04:15:19.284095 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.284105 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:19.284111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:19.284179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:19.309536 2088124 cri.go:89] found id: ""
	I1216 04:15:19.309564 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.309573 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:19.309583 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:19.309595 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:19.336019 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:19.336059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:19.363926 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:19.363997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:19.420745 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:19.420779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:19.437274 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:19.437306 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:19.501939 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:19.493862    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.494691    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496168    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496603    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.498063    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:19.493862    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.494691    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496168    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496603    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.498063    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
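The block above repeats for the rest of this start attempt: every kubectl call against localhost:8443 is refused because no kube-apiserver container or process exists yet, so the log gatherer can only collect kubelet, dmesg, containerd, and container-status output. A minimal sketch of how one could confirm that state by hand from inside the node follows; the profile placeholder and SSH access are assumptions for illustration, not taken from this log:

	# hypothetical manual probe mirroring the loop above; <profile> is a placeholder
	minikube -p <profile> ssh -- sudo pgrep -af kube-apiserver       # expect no output
	minikube -p <profile> ssh -- sudo crictl ps -a --name kube-apiserver
	minikube -p <profile> ssh -- curl -ksS https://localhost:8443/healthz \
	  || echo "apiserver not listening on 8443"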
	I1216 04:15:22.002831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:22.019000 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:22.019099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:22.045729 2088124 cri.go:89] found id: ""
	I1216 04:15:22.045753 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.045762 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:22.045769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:22.045831 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:22.073468 2088124 cri.go:89] found id: ""
	I1216 04:15:22.073494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.073504 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:22.073511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:22.073572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:22.099372 2088124 cri.go:89] found id: ""
	I1216 04:15:22.099397 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.099407 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:22.099413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:22.099475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:22.124283 2088124 cri.go:89] found id: ""
	I1216 04:15:22.124358 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.124371 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:22.124378 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:22.124509 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:22.149430 2088124 cri.go:89] found id: ""
	I1216 04:15:22.149456 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.149466 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:22.149472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:22.149532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:22.179789 2088124 cri.go:89] found id: ""
	I1216 04:15:22.179813 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.179822 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:22.179829 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:22.179920 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:22.233299 2088124 cri.go:89] found id: ""
	I1216 04:15:22.233333 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.233342 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:22.233380 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:22.233495 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:22.281260 2088124 cri.go:89] found id: ""
	I1216 04:15:22.281287 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.281296 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:22.281305 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:22.281354 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:22.299880 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:22.299908 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:22.370389 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:22.359272    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.360819    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.361789    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.363665    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.365341    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:22.359272    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.360819    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.361789    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.363665    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.365341    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:22.370413 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:22.370427 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:22.395585 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:22.395618 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:22.423071 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:22.423103 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:24.979909 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:24.990414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:24.990487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:25.022894 2088124 cri.go:89] found id: ""
	I1216 04:15:25.022933 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.022942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:25.022950 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:25.023035 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:25.057555 2088124 cri.go:89] found id: ""
	I1216 04:15:25.057592 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.057602 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:25.057609 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:25.057674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:25.084421 2088124 cri.go:89] found id: ""
	I1216 04:15:25.084446 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.084455 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:25.084462 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:25.084534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:25.112223 2088124 cri.go:89] found id: ""
	I1216 04:15:25.112249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.112258 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:25.112266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:25.112340 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:25.138162 2088124 cri.go:89] found id: ""
	I1216 04:15:25.138186 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.138195 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:25.138202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:25.138262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:25.165660 2088124 cri.go:89] found id: ""
	I1216 04:15:25.165689 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.165698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:25.165705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:25.165775 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:25.213233 2088124 cri.go:89] found id: ""
	I1216 04:15:25.213260 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.213269 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:25.213275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:25.213333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:25.254540 2088124 cri.go:89] found id: ""
	I1216 04:15:25.254567 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.254576 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:25.254586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:25.254599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:25.290970 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:25.290997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:25.349010 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:25.349046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:25.364592 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:25.364626 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:25.428643 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:25.420082    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.420822    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.422550    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.423262    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.424839    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:25.420082    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.420822    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.422550    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.423262    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.424839    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:25.428666 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:25.428680 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:27.954878 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:27.965363 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:27.965430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:27.991310 2088124 cri.go:89] found id: ""
	I1216 04:15:27.991338 2088124 logs.go:282] 0 containers: []
	W1216 04:15:27.991347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:27.991354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:27.991416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:28.017496 2088124 cri.go:89] found id: ""
	I1216 04:15:28.017519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.017528 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:28.017535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:28.017600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:28.043243 2088124 cri.go:89] found id: ""
	I1216 04:15:28.043267 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.043276 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:28.043282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:28.043349 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:28.070592 2088124 cri.go:89] found id: ""
	I1216 04:15:28.070620 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.070629 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:28.070635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:28.070705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:28.096408 2088124 cri.go:89] found id: ""
	I1216 04:15:28.096430 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.096439 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:28.096446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:28.096517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:28.122523 2088124 cri.go:89] found id: ""
	I1216 04:15:28.122547 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.122556 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:28.122563 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:28.122627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:28.148233 2088124 cri.go:89] found id: ""
	I1216 04:15:28.148256 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.148264 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:28.148270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:28.148335 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:28.174686 2088124 cri.go:89] found id: ""
	I1216 04:15:28.174715 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.174724 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:28.174733 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:28.174745 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:28.248922 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:28.249042 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:28.270319 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:28.270345 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:28.344544 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:28.335802    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.336388    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338081    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338450    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.339995    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:28.335802    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.336388    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338081    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338450    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.339995    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:28.344568 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:28.344583 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:28.370869 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:28.370905 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:30.901180 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:30.914236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:30.914316 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:30.943226 2088124 cri.go:89] found id: ""
	I1216 04:15:30.943247 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.943255 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:30.943262 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:30.943320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:30.969548 2088124 cri.go:89] found id: ""
	I1216 04:15:30.969573 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.969581 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:30.969588 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:30.969648 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:30.996727 2088124 cri.go:89] found id: ""
	I1216 04:15:30.996750 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.996759 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:30.996765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:30.996823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:31.023099 2088124 cri.go:89] found id: ""
	I1216 04:15:31.023125 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.023133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:31.023140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:31.023202 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:31.052543 2088124 cri.go:89] found id: ""
	I1216 04:15:31.052568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.052577 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:31.052584 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:31.052646 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:31.079096 2088124 cri.go:89] found id: ""
	I1216 04:15:31.079119 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.079128 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:31.079134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:31.079197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:31.108706 2088124 cri.go:89] found id: ""
	I1216 04:15:31.108777 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.108801 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:31.108815 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:31.108894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:31.138097 2088124 cri.go:89] found id: ""
	I1216 04:15:31.138122 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.138130 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:31.138140 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:31.138152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:31.163977 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:31.164066 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:31.220358 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:31.220432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:31.291830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:31.291912 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:31.307651 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:31.307678 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:31.376724 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:31.367014    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369142    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369906    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371443    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371922    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:31.367014    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369142    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369906    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371443    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371922    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
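Note the cadence: the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe fires roughly every three seconds (04:15:19, :22, :25, :28, :31, ...), which is minikube waiting for an apiserver process to appear before giving up. The effect is equivalent to a simple poll loop like the sketch below; the 3-second interval is read off the timestamps, and the overall timeout is an assumption, not minikube source:

	# illustrative poll loop; timeout value assumed
	deadline=$((SECONDS + 360))
	while (( SECONDS < deadline )); do
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break   # found a live apiserver
	    sleep 3                                                 # matches the ~3 s spacing above
	done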
	I1216 04:15:33.876969 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:33.887678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:33.887751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:33.911475 2088124 cri.go:89] found id: ""
	I1216 04:15:33.911503 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.911513 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:33.911520 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:33.911581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:33.936829 2088124 cri.go:89] found id: ""
	I1216 04:15:33.936852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.936861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:33.936866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:33.936924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:33.961061 2088124 cri.go:89] found id: ""
	I1216 04:15:33.961085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.961094 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:33.961101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:33.961168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:33.985053 2088124 cri.go:89] found id: ""
	I1216 04:15:33.985078 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.985086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:33.985093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:33.985154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:34.015083 2088124 cri.go:89] found id: ""
	I1216 04:15:34.015112 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.015122 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:34.015129 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:34.015191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:34.040899 2088124 cri.go:89] found id: ""
	I1216 04:15:34.040922 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.040930 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:34.040936 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:34.041001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:34.066663 2088124 cri.go:89] found id: ""
	I1216 04:15:34.066744 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.066771 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:34.066792 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:34.066877 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:34.092631 2088124 cri.go:89] found id: ""
	I1216 04:15:34.092708 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.092733 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:34.092749 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:34.092762 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:34.151180 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:34.151218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:34.167672 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:34.167704 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:34.288358 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:34.277084    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.277708    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.280379    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282393    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282865    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:34.277084    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.277708    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.280379    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282393    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282865    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:34.288382 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:34.288395 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:34.313627 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:34.313660 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:36.841874 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:36.852005 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:36.852078 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:36.875574 2088124 cri.go:89] found id: ""
	I1216 04:15:36.875598 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.875608 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:36.875614 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:36.875674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:36.904945 2088124 cri.go:89] found id: ""
	I1216 04:15:36.905021 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.905045 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:36.905057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:36.905119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:36.930221 2088124 cri.go:89] found id: ""
	I1216 04:15:36.930249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.930259 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:36.930266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:36.930326 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:36.955843 2088124 cri.go:89] found id: ""
	I1216 04:15:36.955870 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.955880 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:36.955887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:36.955947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:36.979492 2088124 cri.go:89] found id: ""
	I1216 04:15:36.979557 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.979583 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:36.979596 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:36.979667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:37.004015 2088124 cri.go:89] found id: ""
	I1216 04:15:37.004045 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.004056 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:37.004064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:37.004144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:37.033766 2088124 cri.go:89] found id: ""
	I1216 04:15:37.033841 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.033868 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:37.033887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:37.033980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:37.058994 2088124 cri.go:89] found id: ""
	I1216 04:15:37.059087 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.059115 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:37.059132 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:37.059146 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:37.121921 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:37.113226    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.113740    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115405    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115894    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.117543    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:37.113226    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.113740    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115405    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115894    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.117543    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:37.121943 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:37.121956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:37.148246 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:37.148285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:37.178974 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:37.179077 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:37.249870 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:37.249909 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:39.789446 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:39.800133 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:39.800214 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:39.824765 2088124 cri.go:89] found id: ""
	I1216 04:15:39.824794 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.824803 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:39.824810 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:39.824872 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:39.849338 2088124 cri.go:89] found id: ""
	I1216 04:15:39.849362 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.849370 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:39.849377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:39.849435 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:39.873874 2088124 cri.go:89] found id: ""
	I1216 04:15:39.873902 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.873911 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:39.873917 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:39.873976 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:39.899109 2088124 cri.go:89] found id: ""
	I1216 04:15:39.899134 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.899143 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:39.899149 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:39.899210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:39.924102 2088124 cri.go:89] found id: ""
	I1216 04:15:39.924128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.924137 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:39.924143 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:39.924208 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:39.949033 2088124 cri.go:89] found id: ""
	I1216 04:15:39.949065 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.949074 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:39.949082 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:39.949144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:39.975169 2088124 cri.go:89] found id: ""
	I1216 04:15:39.975198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.975207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:39.975213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:39.975273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:40.028056 2088124 cri.go:89] found id: ""
	I1216 04:15:40.028085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:40.028094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:40.028104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:40.028116 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:40.085250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:40.085285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:40.101589 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:40.101621 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:40.174562 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:40.165816    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.166569    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.167348    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.168955    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.169429    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:40.165816    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.166569    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.167348    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.168955    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.169429    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:40.174584 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:40.174599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:40.202884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:40.202920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
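The container-status probe above uses a two-level fallback so it still returns something on minimal images: if `crictl` is not on `$PATH`, the command substitution degrades to the bare name, and if `crictl` itself fails, the probe falls back to `docker ps -a`. Unrolled for readability (same behavior as the one-liner in the log, just split into steps):

	# the container-status one-liner from the log, expanded; no behavior change intended
	CRICTL="$(which crictl || echo crictl)"     # `which` fails -> fall back to the bare name
	sudo "$CRICTL" ps -a || sudo docker ps -a   # docker is the last-resort fallback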
	I1216 04:15:42.752364 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:42.763300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:42.763369 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:42.792503 2088124 cri.go:89] found id: ""
	I1216 04:15:42.792529 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.792539 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:42.792545 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:42.792608 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:42.821201 2088124 cri.go:89] found id: ""
	I1216 04:15:42.821226 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.821235 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:42.821242 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:42.821304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:42.847075 2088124 cri.go:89] found id: ""
	I1216 04:15:42.847102 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.847110 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:42.847117 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:42.847179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:42.871486 2088124 cri.go:89] found id: ""
	I1216 04:15:42.871510 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.871519 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:42.871525 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:42.871589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:42.896375 2088124 cri.go:89] found id: ""
	I1216 04:15:42.896402 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.896412 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:42.896418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:42.896505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:42.921735 2088124 cri.go:89] found id: ""
	I1216 04:15:42.921811 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.921844 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:42.921865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:42.921950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:42.950925 2088124 cri.go:89] found id: ""
	I1216 04:15:42.950947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.950955 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:42.950961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:42.951019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:42.975785 2088124 cri.go:89] found id: ""
	I1216 04:15:42.975809 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.975817 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:42.975826 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:42.975840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:42.991441 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:42.991473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:43.054494 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:43.046352    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.047054    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.048590    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.049073    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.050630    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:43.054518 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:43.054532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:43.079941 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:43.079979 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:43.107712 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:43.107738 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
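	The cycle above is minikube's per-component probe: for each expected control-plane container it runs crictl inside the node and gets an empty ID list back. A minimal sketch of reproducing that probe by hand, assuming a hypothetical profile name functional-389759 (substitute the failing profile) and a standard minikube/crictl install:

	    # Probe for the kube-apiserver container the way the log does;
	    # an empty result means containerd has no such container, running or exited.
	    minikube -p functional-389759 ssh -- \
	      sudo crictl ps -a --quiet --name=kube-apiserver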
	I1216 04:15:45.663276 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:45.674206 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:45.674325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:45.698711 2088124 cri.go:89] found id: ""
	I1216 04:15:45.698736 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.698745 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:45.698752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:45.698822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:45.723389 2088124 cri.go:89] found id: ""
	I1216 04:15:45.723413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.723422 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:45.723428 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:45.723494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:45.748842 2088124 cri.go:89] found id: ""
	I1216 04:15:45.748919 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.748935 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:45.748942 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:45.749002 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:45.777156 2088124 cri.go:89] found id: ""
	I1216 04:15:45.777236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.777251 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:45.777265 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:45.777327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:45.802462 2088124 cri.go:89] found id: ""
	I1216 04:15:45.802494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.802503 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:45.802510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:45.802583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:45.829417 2088124 cri.go:89] found id: ""
	I1216 04:15:45.829442 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.829451 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:45.829458 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:45.829521 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:45.854934 2088124 cri.go:89] found id: ""
	I1216 04:15:45.854962 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.854971 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:45.854977 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:45.855095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:45.879249 2088124 cri.go:89] found id: ""
	I1216 04:15:45.879272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.879280 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:45.879289 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:45.879301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:45.895118 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:45.895155 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:45.958262 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:45.949768    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.950592    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952181    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952681    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.954304    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:45.958284 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:45.958298 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:45.984226 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:45.984260 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:46.015984 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:46.016011 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:48.576053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:48.586849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:48.586923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:48.612368 2088124 cri.go:89] found id: ""
	I1216 04:15:48.612394 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.612404 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:48.612410 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:48.612470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:48.641259 2088124 cri.go:89] found id: ""
	I1216 04:15:48.641288 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.641297 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:48.641304 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:48.641368 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:48.665587 2088124 cri.go:89] found id: ""
	I1216 04:15:48.665614 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.665624 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:48.665629 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:48.665704 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:48.691123 2088124 cri.go:89] found id: ""
	I1216 04:15:48.691151 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.691160 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:48.691167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:48.691227 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:48.716275 2088124 cri.go:89] found id: ""
	I1216 04:15:48.716304 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.716314 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:48.716320 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:48.716381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:48.747209 2088124 cri.go:89] found id: ""
	I1216 04:15:48.747236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.747244 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:48.747250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:48.747312 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:48.776967 2088124 cri.go:89] found id: ""
	I1216 04:15:48.776991 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.777001 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:48.777010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:48.777071 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:48.800940 2088124 cri.go:89] found id: ""
	I1216 04:15:48.800965 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.800975 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:48.800985 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:48.800997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:48.856499 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:48.856533 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:48.872208 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:48.872239 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:48.945493 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:48.936737    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.937621    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939381    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939979    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.941612    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:48.945516 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:48.945529 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:48.970477 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:48.970510 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
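	Every "describe nodes" attempt above fails the same way: kubectl, pointed at https://localhost:8443 by the node's kubeconfig, gets connection refused because no apiserver is listening. A quick check, as a sketch only and assuming the same hypothetical profile name, is to look for a listener on that port from inside the node:

	    # Confirm nothing is bound to the apiserver port (8443 comes from the log);
	    # no output from ss means the "connection refused" above is expected.
	    minikube -p functional-389759 ssh -- \
	      sudo ss -tlnp 'sport = :8443'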
	I1216 04:15:51.499166 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:51.515506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:51.515579 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:51.540271 2088124 cri.go:89] found id: ""
	I1216 04:15:51.540297 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.540306 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:51.540313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:51.540373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:51.564213 2088124 cri.go:89] found id: ""
	I1216 04:15:51.564235 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.564244 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:51.564250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:51.564309 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:51.592901 2088124 cri.go:89] found id: ""
	I1216 04:15:51.592924 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.592933 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:51.592939 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:51.593001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:51.617803 2088124 cri.go:89] found id: ""
	I1216 04:15:51.617831 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.617840 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:51.617847 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:51.617906 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:51.643791 2088124 cri.go:89] found id: ""
	I1216 04:15:51.643814 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.643822 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:51.643830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:51.643894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:51.669293 2088124 cri.go:89] found id: ""
	I1216 04:15:51.669324 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.669335 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:51.669345 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:51.669416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:51.697129 2088124 cri.go:89] found id: ""
	I1216 04:15:51.697155 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.697164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:51.697170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:51.697235 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:51.725605 2088124 cri.go:89] found id: ""
	I1216 04:15:51.725631 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.725640 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:51.725650 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:51.725664 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:51.781941 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:51.781976 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:51.798346 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:51.798372 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:51.861456 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:51.853947    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.854888    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.855930    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.856728    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.857532    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:51.861478 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:51.861491 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:51.886476 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:51.886511 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:54.421185 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:54.432641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:54.432721 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:54.487902 2088124 cri.go:89] found id: ""
	I1216 04:15:54.487936 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.487945 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:54.487952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:54.488026 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:54.530347 2088124 cri.go:89] found id: ""
	I1216 04:15:54.530372 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.530381 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:54.530387 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:54.530450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:54.558305 2088124 cri.go:89] found id: ""
	I1216 04:15:54.558339 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.558348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:54.558354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:54.558423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:54.584247 2088124 cri.go:89] found id: ""
	I1216 04:15:54.584271 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.584280 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:54.584286 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:54.584347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:54.608497 2088124 cri.go:89] found id: ""
	I1216 04:15:54.608526 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.608536 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:54.608542 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:54.608601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:54.634256 2088124 cri.go:89] found id: ""
	I1216 04:15:54.634283 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.634293 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:54.634301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:54.634360 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:54.659092 2088124 cri.go:89] found id: ""
	I1216 04:15:54.659132 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.659141 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:54.659148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:54.659210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:54.683797 2088124 cri.go:89] found id: ""
	I1216 04:15:54.683823 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.683832 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:54.683841 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:54.683852 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:54.713212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:54.713238 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:54.769163 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:54.769199 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:54.784702 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:54.784742 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:54.855379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:54.846290    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.847296    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.848173    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.849670    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.850187    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:54.855412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:54.855425 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
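	The roughly three-second cadence of the timestamps comes from a polling loop: minikube keeps running pgrep for an apiserver process and regathering logs until one appears or the wait times out. An equivalent standalone wait loop, as a sketch only (not minikube's source; the profile name is again hypothetical):

	    # Poll for a kube-apiserver process every 3 s, for up to 5 minutes.
	    for _ in $(seq 1 100); do
	      if minikube -p functional-389759 ssh -- \
	           sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver process is up"
	        break
	      fi
	      sleep 3
	    done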
	I1216 04:15:57.382388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:57.393144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:57.393234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:57.418375 2088124 cri.go:89] found id: ""
	I1216 04:15:57.418443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.418467 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:57.418486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:57.418574 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:57.495590 2088124 cri.go:89] found id: ""
	I1216 04:15:57.495668 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.495694 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:57.495716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:57.495813 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:57.536762 2088124 cri.go:89] found id: ""
	I1216 04:15:57.536786 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.536795 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:57.536801 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:57.536859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:57.573379 2088124 cri.go:89] found id: ""
	I1216 04:15:57.573403 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.573412 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:57.573418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:57.573488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:57.601415 2088124 cri.go:89] found id: ""
	I1216 04:15:57.601439 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.601447 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:57.601454 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:57.601514 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:57.625828 2088124 cri.go:89] found id: ""
	I1216 04:15:57.625852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.625860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:57.625866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:57.625932 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:57.651508 2088124 cri.go:89] found id: ""
	I1216 04:15:57.651534 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.651543 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:57.651549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:57.651609 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:57.678194 2088124 cri.go:89] found id: ""
	I1216 04:15:57.678228 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.678242 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:57.678252 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:57.678287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:57.733879 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:57.733916 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:57.750633 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:57.750661 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:57.828100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:57.813970    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.814594    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.815982    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.822718    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.823836    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:57.828131 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:57.828145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:57.855013 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:57.855070 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:00.384284 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:00.398189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:00.398285 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:00.442307 2088124 cri.go:89] found id: ""
	I1216 04:16:00.442337 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.442347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:00.442404 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:00.442487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:00.505962 2088124 cri.go:89] found id: ""
	I1216 04:16:00.505986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.505994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:00.506001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:00.506064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:00.548862 2088124 cri.go:89] found id: ""
	I1216 04:16:00.548940 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.548965 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:00.548984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:00.549098 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:00.576916 2088124 cri.go:89] found id: ""
	I1216 04:16:00.576939 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.576948 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:00.576954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:00.577013 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:00.602863 2088124 cri.go:89] found id: ""
	I1216 04:16:00.602891 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.602901 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:00.602907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:00.602971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:00.628659 2088124 cri.go:89] found id: ""
	I1216 04:16:00.628688 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.628698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:00.628705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:00.628771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:00.654429 2088124 cri.go:89] found id: ""
	I1216 04:16:00.654466 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.654475 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:00.654481 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:00.654556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:00.679835 2088124 cri.go:89] found id: ""
	I1216 04:16:00.679863 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.679877 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:00.679890 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:00.679901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:00.738456 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:00.738501 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:00.754802 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:00.754838 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:00.824660 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:00.815557    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.816379    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818197    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818825    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.820610    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:00.824683 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:00.824698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:00.850142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:00.850176 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:03.377190 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:03.388732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:03.388827 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:03.417059 2088124 cri.go:89] found id: ""
	I1216 04:16:03.417082 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.417090 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:03.417096 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:03.417157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:03.473568 2088124 cri.go:89] found id: ""
	I1216 04:16:03.473591 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.473599 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:03.473605 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:03.473676 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:03.510076 2088124 cri.go:89] found id: ""
	I1216 04:16:03.510097 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.510105 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:03.510111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:03.510170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:03.546041 2088124 cri.go:89] found id: ""
	I1216 04:16:03.546063 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.546072 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:03.546086 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:03.546148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:03.574587 2088124 cri.go:89] found id: ""
	I1216 04:16:03.574672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.574704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:03.574747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:03.574847 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:03.600940 2088124 cri.go:89] found id: ""
	I1216 04:16:03.600964 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.600973 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:03.600979 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:03.601041 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:03.626500 2088124 cri.go:89] found id: ""
	I1216 04:16:03.626524 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.626537 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:03.626544 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:03.626613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:03.651278 2088124 cri.go:89] found id: ""
	I1216 04:16:03.651345 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.651368 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:03.651386 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:03.651401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:03.713437 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:03.704982    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.705525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707260    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707865    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.709525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:03.713461 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:03.713476 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:03.739122 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:03.739183 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:03.769731 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:03.769761 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:03.825343 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:03.825379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
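The cycle ending above is the unit this whole section repeats: probe each control-plane component with crictl ps -a --quiet --name=<component>, treat empty output as "no container found", then gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. For reference, a minimal standalone sketch (not minikube's implementation; it assumes crictl on the node's PATH and passwordless sudo) that reproduces the same probes:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same probe the log shows: list container IDs in any state for this name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe %s failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

Because the probe passes -a, an empty ID list means these containers were never created at all, not merely stopped.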
	I1216 04:16:06.341217 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:06.351622 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:06.351695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:06.377192 2088124 cri.go:89] found id: ""
	I1216 04:16:06.377220 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.377229 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:06.377236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:06.377298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:06.407491 2088124 cri.go:89] found id: ""
	I1216 04:16:06.407516 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.407524 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:06.407530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:06.407587 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:06.432854 2088124 cri.go:89] found id: ""
	I1216 04:16:06.432881 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.432890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:06.432896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:06.432954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:06.508461 2088124 cri.go:89] found id: ""
	I1216 04:16:06.508483 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.508502 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:06.508510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:06.508572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:06.537008 2088124 cri.go:89] found id: ""
	I1216 04:16:06.537031 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.537039 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:06.537045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:06.537102 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:06.563652 2088124 cri.go:89] found id: ""
	I1216 04:16:06.563723 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.563740 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:06.563747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:06.563841 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:06.589523 2088124 cri.go:89] found id: ""
	I1216 04:16:06.589599 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.589623 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:06.589642 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:06.589725 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:06.615510 2088124 cri.go:89] found id: ""
	I1216 04:16:06.615577 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.615599 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:06.615623 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:06.615655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:06.670726 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:06.670760 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:06.689463 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:06.689495 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:06.755339 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:06.747246    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.748070    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.749790    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.750091    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.751596    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:06.755362 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:06.755375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:06.780884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:06.780917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
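Every describe-nodes failure in this section is the same symptom: nothing is listening on localhost:8443, so kubectl's API discovery fails at the TCP connect, before TLS or authentication are ever attempted. A minimal sketch of that reachability check (a hypothetical helper, meant to run on the node itself, e.g. via minikube ssh):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's "connection refused" means this dial fails the same way.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}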
	I1216 04:16:09.313406 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:09.323603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:09.323673 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:09.351600 2088124 cri.go:89] found id: ""
	I1216 04:16:09.351624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.351632 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:09.351639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:09.351699 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:09.375845 2088124 cri.go:89] found id: ""
	I1216 04:16:09.375869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.375878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:09.375885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:09.375950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:09.400733 2088124 cri.go:89] found id: ""
	I1216 04:16:09.400756 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.400764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:09.400770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:09.400830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:09.423762 2088124 cri.go:89] found id: ""
	I1216 04:16:09.423785 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.423793 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:09.423799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:09.423856 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:09.457898 2088124 cri.go:89] found id: ""
	I1216 04:16:09.457971 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.457993 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:09.458014 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:09.458132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:09.507416 2088124 cri.go:89] found id: ""
	I1216 04:16:09.507445 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.507453 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:09.507459 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:09.507518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:09.542968 2088124 cri.go:89] found id: ""
	I1216 04:16:09.543084 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.543115 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:09.543169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:09.543294 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:09.568289 2088124 cri.go:89] found id: ""
	I1216 04:16:09.568313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.568321 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:09.568331 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:09.568343 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:09.630690 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:09.621816    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.622488    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624212    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624769    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.626372    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:09.630716 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:09.630732 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:09.656388 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:09.656424 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:09.684126 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:09.684152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:09.742624 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:09.742662 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:12.259263 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:12.269891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:12.269959 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:12.294506 2088124 cri.go:89] found id: ""
	I1216 04:16:12.294532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.294541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:12.294546 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:12.294628 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:12.318895 2088124 cri.go:89] found id: ""
	I1216 04:16:12.318924 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.318932 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:12.318938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:12.318994 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:12.344134 2088124 cri.go:89] found id: ""
	I1216 04:16:12.344158 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.344167 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:12.344173 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:12.344234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:12.368552 2088124 cri.go:89] found id: ""
	I1216 04:16:12.368574 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.368583 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:12.368590 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:12.368654 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:12.396826 2088124 cri.go:89] found id: ""
	I1216 04:16:12.396854 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.396863 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:12.396870 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:12.396931 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:12.422048 2088124 cri.go:89] found id: ""
	I1216 04:16:12.422076 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.422085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:12.422092 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:12.422153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:12.485647 2088124 cri.go:89] found id: ""
	I1216 04:16:12.485669 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.485677 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:12.485684 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:12.485750 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:12.529516 2088124 cri.go:89] found id: ""
	I1216 04:16:12.529539 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.529547 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:12.529557 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:12.529569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:12.545674 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:12.545705 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:12.608192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:12.599988    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.600547    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602099    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602578    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.604093    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:12.608257 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:12.608279 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:12.633428 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:12.633463 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:12.661070 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:12.661097 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
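The gathering steps are plain shell commands executed over SSH: journalctl for kubelet and containerd, a filtered dmesg, and a crictl listing with a docker fallback. A minimal sketch (assuming it runs on the node with sudo available; the commands are copied verbatim from the Run: lines above) that wraps them the same way:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash, as the Run: lines above do.
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("=== %s (err: %v) ===\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}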
	I1216 04:16:15.217877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:15.228678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:15.228748 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:15.253119 2088124 cri.go:89] found id: ""
	I1216 04:16:15.253143 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.253152 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:15.253158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:15.253220 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:15.285145 2088124 cri.go:89] found id: ""
	I1216 04:16:15.285168 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.285177 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:15.285183 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:15.285243 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:15.311311 2088124 cri.go:89] found id: ""
	I1216 04:16:15.311339 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.311348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:15.311355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:15.311416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:15.336241 2088124 cri.go:89] found id: ""
	I1216 04:16:15.336271 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.336286 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:15.336293 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:15.336354 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:15.362230 2088124 cri.go:89] found id: ""
	I1216 04:16:15.362258 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.362268 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:15.362275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:15.362334 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:15.387340 2088124 cri.go:89] found id: ""
	I1216 04:16:15.387362 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.387371 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:15.387377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:15.387437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:15.412173 2088124 cri.go:89] found id: ""
	I1216 04:16:15.412201 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.412210 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:15.412217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:15.412281 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:15.454276 2088124 cri.go:89] found id: ""
	I1216 04:16:15.454354 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.454378 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:15.454404 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:15.454446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:15.556767 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:15.556806 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:15.573628 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:15.573670 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:15.638801 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:15.629487    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.630048    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.631845    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.632428    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.634191    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:15.638865 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:15.638886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:15.663907 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:15.663944 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
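The timestamps show a full probe cycle starting roughly every three seconds (04:16:03, :06, :09, :12, :15, ...), each opening with the same pgrep for a live kube-apiserver process. The loop below is a generic sketch of that cadence, a fixed-interval poll against a deadline; the interval and timeout are assumptions for illustration, not minikube's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning checks the same signal the log polls for: a kube-apiserver
// process on the node (pgrep exits non-zero when nothing matches).
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed timeout, for illustration
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s spacing between cycles
	}
	fmt.Println("timed out waiting for kube-apiserver")
}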
	I1216 04:16:18.197135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:18.208099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:18.208177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:18.234350 2088124 cri.go:89] found id: ""
	I1216 04:16:18.234379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.234388 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:18.234394 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:18.234459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:18.258985 2088124 cri.go:89] found id: ""
	I1216 04:16:18.259013 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.259022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:18.259028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:18.259110 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:18.284132 2088124 cri.go:89] found id: ""
	I1216 04:16:18.284156 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.284164 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:18.284171 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:18.284230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:18.309961 2088124 cri.go:89] found id: ""
	I1216 04:16:18.309989 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.309997 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:18.310004 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:18.310108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:18.336186 2088124 cri.go:89] found id: ""
	I1216 04:16:18.336212 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.336221 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:18.336228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:18.336289 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:18.361829 2088124 cri.go:89] found id: ""
	I1216 04:16:18.361858 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.361867 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:18.361874 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:18.361934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:18.388363 2088124 cri.go:89] found id: ""
	I1216 04:16:18.388385 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.388394 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:18.388400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:18.388463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:18.416963 2088124 cri.go:89] found id: ""
	I1216 04:16:18.416988 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.416996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:18.417006 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:18.417018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:18.500995 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:18.503604 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:18.521452 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:18.521531 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:18.589729 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:18.580618    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.581508    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583296    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583964    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.585797    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:18.589761 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:18.589775 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:18.616012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:18.616047 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.144794 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:21.155656 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:21.155729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:21.184379 2088124 cri.go:89] found id: ""
	I1216 04:16:21.184403 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.184411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:21.184417 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:21.184484 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:21.210137 2088124 cri.go:89] found id: ""
	I1216 04:16:21.210163 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.210172 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:21.210178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:21.210240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:21.235283 2088124 cri.go:89] found id: ""
	I1216 04:16:21.235307 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.235315 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:21.235321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:21.235381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:21.263715 2088124 cri.go:89] found id: ""
	I1216 04:16:21.263738 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.263746 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:21.263753 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:21.263823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:21.287600 2088124 cri.go:89] found id: ""
	I1216 04:16:21.287624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.287632 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:21.287638 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:21.287698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:21.315897 2088124 cri.go:89] found id: ""
	I1216 04:16:21.315919 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.315927 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:21.315934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:21.315993 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:21.339842 2088124 cri.go:89] found id: ""
	I1216 04:16:21.339866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.339874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:21.339880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:21.339939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:21.364501 2088124 cri.go:89] found id: ""
	I1216 04:16:21.364526 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.364535 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:21.364544 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:21.364556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:21.379974 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:21.380060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:21.474639 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:21.442912    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.443871    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.447436    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.448028    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.468365    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:21.474664 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:21.474676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:21.531857 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:21.531938 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.561122 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:21.561149 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
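The {State:all Name:... Namespaces:[]} text in each "listing CRI containers" line is Go's %+v rendering of a filter value, not shell syntax. A minimal sketch of a struct that reproduces the printed form (the field types are assumptions; the real type lives in minikube's cri package and may differ):

package main

import "fmt"

// ListFilter mirrors only the printed shape of the log's filter value.
type ListFilter struct {
	State      string
	Name       string
	Namespaces []string
}

func main() {
	f := ListFilter{State: "all", Name: "kube-apiserver"}
	fmt.Printf("listing CRI containers: %+v\n", f)
	// Prints: listing CRI containers: {State:all Name:kube-apiserver Namespaces:[]}
}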
	I1216 04:16:24.116616 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:24.126986 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:24.127075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:24.154481 2088124 cri.go:89] found id: ""
	I1216 04:16:24.154507 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.154526 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:24.154533 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:24.154591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:24.180064 2088124 cri.go:89] found id: ""
	I1216 04:16:24.180087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.180095 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:24.180103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:24.180165 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:24.205398 2088124 cri.go:89] found id: ""
	I1216 04:16:24.205424 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.205433 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:24.205440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:24.205499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:24.230340 2088124 cri.go:89] found id: ""
	I1216 04:16:24.230369 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.230377 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:24.230384 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:24.230445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:24.255009 2088124 cri.go:89] found id: ""
	I1216 04:16:24.255056 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.255066 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:24.255072 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:24.255131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:24.280187 2088124 cri.go:89] found id: ""
	I1216 04:16:24.280214 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.280224 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:24.280230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:24.280287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:24.304688 2088124 cri.go:89] found id: ""
	I1216 04:16:24.304711 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.304720 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:24.304726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:24.304788 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:24.329482 2088124 cri.go:89] found id: ""
	I1216 04:16:24.329505 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.329514 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:24.329523 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:24.329535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:24.345077 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:24.345106 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:24.410594 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:24.402842    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.403261    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.404850    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.405187    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.406756    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:24.410665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:24.410695 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:24.437142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:24.437180 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:24.512425 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:24.512454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:27.075945 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:27.086676 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:27.086751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:27.111375 2088124 cri.go:89] found id: ""
	I1216 04:16:27.111402 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.111411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:27.111418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:27.111479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:27.136068 2088124 cri.go:89] found id: ""
	I1216 04:16:27.136100 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.136109 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:27.136115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:27.136174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:27.160473 2088124 cri.go:89] found id: ""
	I1216 04:16:27.160503 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.160513 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:27.160519 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:27.160580 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:27.186608 2088124 cri.go:89] found id: ""
	I1216 04:16:27.186632 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.186639 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:27.186646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:27.186708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:27.217149 2088124 cri.go:89] found id: ""
	I1216 04:16:27.217173 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.217182 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:27.217189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:27.217253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:27.243558 2088124 cri.go:89] found id: ""
	I1216 04:16:27.243583 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.243592 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:27.243598 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:27.243665 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:27.269387 2088124 cri.go:89] found id: ""
	I1216 04:16:27.269415 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.269425 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:27.269433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:27.269494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:27.296709 2088124 cri.go:89] found id: ""
	I1216 04:16:27.296778 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.296790 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
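[Annotation] Each scan above walks a fixed list of control-plane component names and asks crictl for matching container IDs. Zero IDs for every name, including etcd and kube-apiserver, means the control plane was never created rather than crashed mid-run. A minimal sketch of that scan using the same commands (the loop structure is illustrative, not minikube's cri.go):

// cri_scan.go - a sketch of the per-component container scan in this log:
// ask crictl for container IDs matching each control-plane name and report
// components with no containers at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes exited ones.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}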
	I1216 04:16:27.296800 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:27.296811 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:27.327331 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:27.327359 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:27.384171 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:27.384206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:27.400922 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:27.400958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:27.528794 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:27.519212    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.519883    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521482    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521988    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.523599    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:27.519212    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.519883    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521482    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521988    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.523599    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:27.528819 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:27.528835 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
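[Annotation] Once the scan comes back empty, minikube gathers the same five log sources on every cycle — kubelet, dmesg, describe nodes, containerd, and container status — with only their order varying between cycles. A minimal sketch of that gathering step using the exact commands from the log (the map-based loop, whose iteration order is deliberately unspecified in Go, is illustrative):

// log_gather.go - a sketch of the "Gathering logs for X ..." step.
// The shell commands are copied verbatim from the report above.
package main

import (
	"fmt"
	"os/exec"
)

var gatherers = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"containerd":       "sudo journalctl -u containerd -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range gatherers {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// e.g. "describe nodes" fails while the apiserver is down.
			fmt.Printf("  %s failed: %v\n", name, err)
		}
		fmt.Printf("  captured %d bytes\n", len(out))
	}
}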
	I1216 04:16:30.057685 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:30.079715 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:30.079801 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:30.110026 2088124 cri.go:89] found id: ""
	I1216 04:16:30.110054 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.110063 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:30.110076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:30.110143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:30.137960 2088124 cri.go:89] found id: ""
	I1216 04:16:30.137986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.137994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:30.138001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:30.138065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:30.165148 2088124 cri.go:89] found id: ""
	I1216 04:16:30.165177 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.165186 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:30.165194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:30.165283 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:30.192836 2088124 cri.go:89] found id: ""
	I1216 04:16:30.192866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.192875 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:30.192883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:30.192951 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:30.220187 2088124 cri.go:89] found id: ""
	I1216 04:16:30.220213 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.220227 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:30.220233 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:30.220333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:30.247843 2088124 cri.go:89] found id: ""
	I1216 04:16:30.247872 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.247882 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:30.247889 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:30.247980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:30.274429 2088124 cri.go:89] found id: ""
	I1216 04:16:30.274454 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.274463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:30.274470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:30.274583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:30.302775 2088124 cri.go:89] found id: ""
	I1216 04:16:30.302809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.302819 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:30.302844 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:30.302863 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:30.318968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:30.318999 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:30.383767 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:30.383790 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:30.383804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:30.410095 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:30.410131 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:30.468723 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:30.468804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:33.056394 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:33.067079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:33.067155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:33.092150 2088124 cri.go:89] found id: ""
	I1216 04:16:33.092178 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.092188 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:33.092194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:33.092260 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:33.117824 2088124 cri.go:89] found id: ""
	I1216 04:16:33.117852 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.117861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:33.117868 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:33.117927 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:33.143646 2088124 cri.go:89] found id: ""
	I1216 04:16:33.143672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.143680 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:33.143686 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:33.143744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:33.169791 2088124 cri.go:89] found id: ""
	I1216 04:16:33.169818 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.169826 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:33.169833 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:33.169893 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:33.194288 2088124 cri.go:89] found id: ""
	I1216 04:16:33.194313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.194323 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:33.194329 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:33.194388 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:33.221028 2088124 cri.go:89] found id: ""
	I1216 04:16:33.221062 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.221071 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:33.221078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:33.221178 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:33.245742 2088124 cri.go:89] found id: ""
	I1216 04:16:33.245769 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.245778 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:33.245784 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:33.245852 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:33.270847 2088124 cri.go:89] found id: ""
	I1216 04:16:33.270870 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.270879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:33.270888 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:33.270899 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:33.327247 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:33.327283 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:33.342917 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:33.342947 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:33.407775 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:33.407796 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:33.407809 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:33.433956 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:33.433990 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
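[Annotation] The cri.go lines name /run/containerd/runc/k8s.io as the root being listed: that is runc's state directory for containerd's "k8s.io" namespace, holding one subdirectory per live container. An empty directory there is consistent with every crictl query returning no IDs. A quick sketch that inspects the same root directly, assuming you are shelled into the node (this is a cross-check, not what cri.go actually does):

// runc_root.go - list container state directories under the runc root
// named in the log above; run as root on the node itself.
package main

import (
	"fmt"
	"os"
)

func main() {
	const root = "/run/containerd/runc/k8s.io" // one subdirectory per running container
	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("cannot read runc root:", err)
		return
	}
	fmt.Printf("%d container state dir(s) under %s\n", len(entries), root)
	for _, e := range entries {
		fmt.Println(" -", e.Name())
	}
}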
	I1216 04:16:36.019705 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:36.031406 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:36.031494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:36.061621 2088124 cri.go:89] found id: ""
	I1216 04:16:36.061647 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.061657 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:36.061664 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:36.061730 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:36.088137 2088124 cri.go:89] found id: ""
	I1216 04:16:36.088162 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.088171 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:36.088178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:36.088239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:36.113810 2088124 cri.go:89] found id: ""
	I1216 04:16:36.113833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.113842 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:36.113849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:36.113913 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:36.139840 2088124 cri.go:89] found id: ""
	I1216 04:16:36.139866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.139874 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:36.139883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:36.139965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:36.168529 2088124 cri.go:89] found id: ""
	I1216 04:16:36.168553 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.168561 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:36.168567 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:36.168627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:36.196976 2088124 cri.go:89] found id: ""
	I1216 04:16:36.197002 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.197027 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:36.197050 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:36.197133 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:36.221877 2088124 cri.go:89] found id: ""
	I1216 04:16:36.221903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.221912 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:36.221918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:36.222032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:36.248921 2088124 cri.go:89] found id: ""
	I1216 04:16:36.248947 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.248956 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:36.248966 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:36.248977 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:36.264593 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:36.264622 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:36.329217 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:36.329239 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:36.329252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:36.354482 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:36.354514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:36.382824 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:36.382890 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:38.944004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:38.957491 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:38.957613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:38.982761 2088124 cri.go:89] found id: ""
	I1216 04:16:38.982787 2088124 logs.go:282] 0 containers: []
	W1216 04:16:38.982796 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:38.982803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:38.982861 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:39.010506 2088124 cri.go:89] found id: ""
	I1216 04:16:39.010532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.010542 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:39.010549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:39.010630 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:39.035827 2088124 cri.go:89] found id: ""
	I1216 04:16:39.035853 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.035862 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:39.035875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:39.035934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:39.060421 2088124 cri.go:89] found id: ""
	I1216 04:16:39.060448 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.060457 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:39.060463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:39.060550 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:39.087481 2088124 cri.go:89] found id: ""
	I1216 04:16:39.087504 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.087512 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:39.087518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:39.087577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:39.111994 2088124 cri.go:89] found id: ""
	I1216 04:16:39.112028 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.112037 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:39.112044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:39.112114 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:39.136060 2088124 cri.go:89] found id: ""
	I1216 04:16:39.136093 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.136101 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:39.136108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:39.136186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:39.166063 2088124 cri.go:89] found id: ""
	I1216 04:16:39.166090 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.166099 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:39.166109 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:39.166120 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:39.222912 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:39.222949 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:39.239064 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:39.239096 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:39.305289 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:39.305312 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:39.305326 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:39.330965 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:39.330997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:41.862236 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:41.873016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:41.873089 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:41.900650 2088124 cri.go:89] found id: ""
	I1216 04:16:41.900675 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.900684 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:41.900691 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:41.900754 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:41.924986 2088124 cri.go:89] found id: ""
	I1216 04:16:41.925012 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.925022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:41.925028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:41.925090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:41.950157 2088124 cri.go:89] found id: ""
	I1216 04:16:41.950182 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.950191 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:41.950197 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:41.950257 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:41.975738 2088124 cri.go:89] found id: ""
	I1216 04:16:41.975763 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.975772 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:41.975778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:41.975837 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:42.008172 2088124 cri.go:89] found id: ""
	I1216 04:16:42.008203 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.008214 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:42.008221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:42.008295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:42.036816 2088124 cri.go:89] found id: ""
	I1216 04:16:42.036841 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.036851 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:42.036858 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:42.036969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:42.066668 2088124 cri.go:89] found id: ""
	I1216 04:16:42.066697 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.066706 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:42.066713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:42.066787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:42.098167 2088124 cri.go:89] found id: ""
	I1216 04:16:42.098200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.098217 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:42.098231 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:42.098245 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:42.184589 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:42.184617 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:42.184635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:42.214306 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:42.214348 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:42.253172 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:42.253203 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:42.312705 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:42.312757 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
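[Annotation] The cycle start times (04:16:24, :27, :30, :33, :36, :39, :42, :45, ...) show a steady ~3-second retry cadence, bounded by the overall start timeout that ultimately fails these tests after several hundred seconds. A minimal sketch of that shape; the interval is inferred from the timestamps, and the demo timeout is shortened so the example terminates quickly:

// poll_until.go - re-run a check every interval until it succeeds or a
// deadline passes; this mirrors the cadence in the log, not minikube's code.
package main

import (
	"fmt"
	"time"
)

func pollUntil(interval, timeout time.Duration, check func() bool) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	// The real wait runs for minutes; 9 s here keeps the demo short.
	ok := pollUntil(3*time.Second, 9*time.Second, func() bool {
		fmt.Println("checking for kube-apiserver ...")
		return false // in the report above, this check never succeeds
	})
	fmt.Println("apiserver healthy:", ok)
}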
	I1216 04:16:44.831426 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:44.842214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:44.842287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:44.872804 2088124 cri.go:89] found id: ""
	I1216 04:16:44.872833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.872843 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:44.872851 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:44.872915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:44.903988 2088124 cri.go:89] found id: ""
	I1216 04:16:44.904064 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.904089 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:44.904108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:44.904200 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:44.930758 2088124 cri.go:89] found id: ""
	I1216 04:16:44.930837 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.930861 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:44.930880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:44.930971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:44.955785 2088124 cri.go:89] found id: ""
	I1216 04:16:44.955809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.955817 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:44.955823 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:44.955883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:44.983685 2088124 cri.go:89] found id: ""
	I1216 04:16:44.983762 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.983785 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:44.983800 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:44.983876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:45.034599 2088124 cri.go:89] found id: ""
	I1216 04:16:45.034623 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.034631 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:45.034639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:45.034713 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:45.106900 2088124 cri.go:89] found id: ""
	I1216 04:16:45.106927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.106937 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:45.106945 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:45.107019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:45.148790 2088124 cri.go:89] found id: ""
	I1216 04:16:45.148816 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.148826 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:45.148837 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:45.148851 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:45.242114 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:45.242166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:45.275372 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:45.275416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:45.355175 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:45.355241 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:45.355263 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:45.382211 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:45.382248 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:47.915609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:47.927521 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:47.927603 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:47.957165 2088124 cri.go:89] found id: ""
	I1216 04:16:47.957192 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.957205 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:47.957212 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:47.957278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:47.983356 2088124 cri.go:89] found id: ""
	I1216 04:16:47.983379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.983396 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:47.983408 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:47.983475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:48.012782 2088124 cri.go:89] found id: ""
	I1216 04:16:48.012807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.012815 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:48.012822 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:48.012887 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:48.042072 2088124 cri.go:89] found id: ""
	I1216 04:16:48.042096 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.042105 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:48.042111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:48.042172 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:48.066925 2088124 cri.go:89] found id: ""
	I1216 04:16:48.066954 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.066963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:48.066970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:48.067032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:48.097340 2088124 cri.go:89] found id: ""
	I1216 04:16:48.097366 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.097378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:48.097385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:48.097470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:48.126364 2088124 cri.go:89] found id: ""
	I1216 04:16:48.126397 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.126407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:48.126413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:48.126510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:48.152175 2088124 cri.go:89] found id: ""
	I1216 04:16:48.152199 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.152207 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:48.152217 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:48.152232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:48.216814 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:48.216861 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:48.235153 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:48.235187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:48.303336 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
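Every `describe nodes` attempt in this section fails the same way: kubectl dials https://localhost:8443 on the node and the TCP connect is refused, which is the expected symptom when the probes above all return empty ID lists (no kube-apiserver container exists to listen there). A minimal sketch of the same reachability check, assuming localhost:8443 is where the apiserver should be listening:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused connect here is exactly the `connection refused`
	// that kubectl keeps reporting in the stderr blocks above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}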
	I1216 04:16:48.303404 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:48.303433 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:48.332107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:48.332175 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
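The "container status" command above uses a shell fallback: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` resolves crictl's full path when available (falling back to the bare name so a missing binary still produces a clear error), and if the whole crictl invocation fails it retries with `docker ps -a`. A minimal Go sketch of the same precedence, without the backtick substitution; the wrapper is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and only falls back to docker when
// the crictl invocation itself fails, mirroring the `||` in the log.
func containerStatus() ([]byte, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime CLI answered:", err)
		return
	}
	fmt.Printf("%s", out)
}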
	I1216 04:16:50.863912 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:50.876115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:50.876205 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:50.902170 2088124 cri.go:89] found id: ""
	I1216 04:16:50.902200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.902209 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:50.902216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:50.902273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:50.925870 2088124 cri.go:89] found id: ""
	I1216 04:16:50.925903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.925912 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:50.925918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:50.925986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:50.950257 2088124 cri.go:89] found id: ""
	I1216 04:16:50.950283 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.950293 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:50.950299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:50.950358 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:50.975507 2088124 cri.go:89] found id: ""
	I1216 04:16:50.975531 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.975541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:50.975547 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:50.975607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:50.999494 2088124 cri.go:89] found id: ""
	I1216 04:16:50.999520 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.999529 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:50.999535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:50.999599 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:51.026658 2088124 cri.go:89] found id: ""
	I1216 04:16:51.026685 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.026694 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:51.026701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:51.026760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:51.051749 2088124 cri.go:89] found id: ""
	I1216 04:16:51.051775 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.051784 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:51.051790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:51.051868 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:51.076898 2088124 cri.go:89] found id: ""
	I1216 04:16:51.076927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.076938 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:51.076948 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:51.076960 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:51.103255 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:51.103293 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:51.134833 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:51.134859 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:51.193704 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:51.193741 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:51.212900 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:51.212928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:51.297351 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:51.288083   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.288656   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.289678   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291273   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291873   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:51.288083   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.288656   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.289678   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291273   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291873   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
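The timestamps show the overall shape of this section: a `pgrep -xnf kube-apiserver.*minikube.*` probe roughly every three seconds (04:16:48, :50, :53, :56, ...), each miss followed by a full log-gathering pass. A minimal sketch of such a wait loop; the interval and deadline are illustrative, not minikube's configured values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep: -f matches against the full command line, -x requires
		// that line to match the pattern exactly, -n keeps the newest PID.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}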
	I1216 04:16:53.797612 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:53.808331 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:53.808407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:53.832739 2088124 cri.go:89] found id: ""
	I1216 04:16:53.832807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.832829 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:53.832850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:53.832945 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:53.857832 2088124 cri.go:89] found id: ""
	I1216 04:16:53.857869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.857878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:53.857885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:53.857954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:53.885064 2088124 cri.go:89] found id: ""
	I1216 04:16:53.885087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.885095 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:53.885101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:53.885158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:53.913372 2088124 cri.go:89] found id: ""
	I1216 04:16:53.913451 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.913475 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:53.913493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:53.913586 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:53.940577 2088124 cri.go:89] found id: ""
	I1216 04:16:53.940646 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.940673 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:53.940687 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:53.940764 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:53.966496 2088124 cri.go:89] found id: ""
	I1216 04:16:53.966534 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.966543 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:53.966552 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:53.966623 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:53.992796 2088124 cri.go:89] found id: ""
	I1216 04:16:53.992820 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.992828 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:53.992834 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:53.992896 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:54.019752 2088124 cri.go:89] found id: ""
	I1216 04:16:54.019840 2088124 logs.go:282] 0 containers: []
	W1216 04:16:54.019857 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:54.019868 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:54.019880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:54.079349 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:54.079394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:54.098509 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:54.098593 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:54.166447 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:54.157535   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.158625   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160232   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160884   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.162506   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:54.157535   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.158625   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160232   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160884   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.162506   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:54.166510 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:54.166549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:54.191683 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:54.191718 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
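The kubelet and containerd logs in each gathering pass come from the systemd journal: `-u` selects the unit and `-n 400` caps output at the newest 400 entries. A minimal sketch of that collection step in Go, assuming both units exist on the node:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		// Same invocation as the log: last 400 journal lines per unit.
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("journalctl for %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s ==\n%s", unit, out)
	}
}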
	I1216 04:16:56.719163 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:56.748538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:56.748613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:56.786218 2088124 cri.go:89] found id: ""
	I1216 04:16:56.786244 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.786253 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:56.786259 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:56.786320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:56.812994 2088124 cri.go:89] found id: ""
	I1216 04:16:56.813016 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.813024 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:56.813031 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:56.813090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:56.841729 2088124 cri.go:89] found id: ""
	I1216 04:16:56.841751 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.841760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:56.841766 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:56.841825 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:56.870356 2088124 cri.go:89] found id: ""
	I1216 04:16:56.870379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.870387 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:56.870393 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:56.870451 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:56.899841 2088124 cri.go:89] found id: ""
	I1216 04:16:56.899867 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.899877 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:56.899883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:56.899943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:56.924316 2088124 cri.go:89] found id: ""
	I1216 04:16:56.924343 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.924352 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:56.924359 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:56.924417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:56.948789 2088124 cri.go:89] found id: ""
	I1216 04:16:56.948815 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.948824 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:56.948830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:56.948891 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:56.977394 2088124 cri.go:89] found id: ""
	I1216 04:16:56.977423 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.977432 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:56.977441 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:56.977453 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:57.032732 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:57.032770 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:57.048273 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:57.048302 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:57.115644 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:57.106949   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.107590   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109199   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109751   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.111454   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:57.106949   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.107590   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109199   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109751   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.111454   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:57.115665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:57.115685 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:57.140936 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:57.140971 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:59.669285 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:59.682343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:59.682415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:59.722722 2088124 cri.go:89] found id: ""
	I1216 04:16:59.722750 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.722758 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:59.722764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:59.722824 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:59.778634 2088124 cri.go:89] found id: ""
	I1216 04:16:59.778659 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.778667 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:59.778674 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:59.778733 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:59.817378 2088124 cri.go:89] found id: ""
	I1216 04:16:59.817470 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.817498 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:59.817538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:59.817644 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:59.848330 2088124 cri.go:89] found id: ""
	I1216 04:16:59.848356 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.848365 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:59.848372 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:59.848459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:59.880033 2088124 cri.go:89] found id: ""
	I1216 04:16:59.880061 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.880074 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:59.880080 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:59.880154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:59.909206 2088124 cri.go:89] found id: ""
	I1216 04:16:59.909231 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.909241 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:59.909248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:59.909351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:59.934604 2088124 cri.go:89] found id: ""
	I1216 04:16:59.934630 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.934639 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:59.934646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:59.934708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:59.959916 2088124 cri.go:89] found id: ""
	I1216 04:16:59.959994 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.960011 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:59.960022 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:59.960035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:00.015911 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:00.016018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:00.105766 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:00.105818 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:00.319730 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:00.290488   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.291031   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.293087   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.295468   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.296159   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:00.290488   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.291031   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.293087   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.295468   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.296159   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:00.319780 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:00.319793 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:00.371509 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:00.371569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
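The dmesg step in each pass filters the kernel ring buffer down to warning-and-above records before trimming to 400 lines. A minimal sketch, assuming util-linux dmesg semantics for the flags shown in the log (`-H` human-readable output, `-L=never` no color, `-P` no pager, `--level` severity filter); here the tail is done in Go instead of the shell pipeline:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "dmesg", "-PH", "-L=never",
		"--level", "warn,err,crit,alert,emerg").Output()
	if err != nil {
		fmt.Println("dmesg failed:", err)
		return
	}
	lines := strings.Split(strings.TrimRight(string(out), "\n"), "\n")
	if len(lines) > 400 {
		lines = lines[len(lines)-400:] // same effect as `| tail -n 400`
	}
	fmt.Println(strings.Join(lines, "\n"))
}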
	I1216 04:17:02.957388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:02.969075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:02.969174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:02.996244 2088124 cri.go:89] found id: ""
	I1216 04:17:02.996268 2088124 logs.go:282] 0 containers: []
	W1216 04:17:02.996276 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:02.996283 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:02.996351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:03.035674 2088124 cri.go:89] found id: ""
	I1216 04:17:03.035699 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.035709 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:03.035716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:03.035786 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:03.063231 2088124 cri.go:89] found id: ""
	I1216 04:17:03.063262 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.063271 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:03.063278 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:03.063348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:03.090248 2088124 cri.go:89] found id: ""
	I1216 04:17:03.090277 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.090285 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:03.090292 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:03.090357 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:03.118599 2088124 cri.go:89] found id: ""
	I1216 04:17:03.118628 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.118637 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:03.118643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:03.118705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:03.145364 2088124 cri.go:89] found id: ""
	I1216 04:17:03.145394 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.145403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:03.145411 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:03.145476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:03.174022 2088124 cri.go:89] found id: ""
	I1216 04:17:03.174047 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.174057 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:03.174064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:03.174132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:03.201495 2088124 cri.go:89] found id: ""
	I1216 04:17:03.201518 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.201527 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:03.201537 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:03.201549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:03.259166 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:03.259202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:03.276281 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:03.276319 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:03.347465 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:03.338991   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.339696   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341341   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341888   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.343162   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:03.338991   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.339696   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341341   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341888   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.343162   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:03.347486 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:03.347499 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:03.374421 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:03.374460 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:05.905789 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:05.917930 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:05.918028 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:05.944068 2088124 cri.go:89] found id: ""
	I1216 04:17:05.944092 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.944100 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:05.944106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:05.944170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:05.971887 2088124 cri.go:89] found id: ""
	I1216 04:17:05.971915 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.971924 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:05.971931 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:05.971998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:05.999415 2088124 cri.go:89] found id: ""
	I1216 04:17:05.999452 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.999467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:05.999474 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:05.999547 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:06.038021 2088124 cri.go:89] found id: ""
	I1216 04:17:06.038109 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.038128 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:06.038138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:06.038231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:06.069582 2088124 cri.go:89] found id: ""
	I1216 04:17:06.069610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.069620 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:06.069626 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:06.069702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:06.102728 2088124 cri.go:89] found id: ""
	I1216 04:17:06.102753 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.102763 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:06.102770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:06.102846 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:06.131178 2088124 cri.go:89] found id: ""
	I1216 04:17:06.131372 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.131401 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:06.131420 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:06.131527 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:06.158881 2088124 cri.go:89] found id: ""
	I1216 04:17:06.158966 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.158996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:06.159061 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:06.159098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:06.185524 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:06.185554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:06.221206 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:06.221235 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:06.280309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:06.280357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:06.297032 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:06.297065 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:06.363186 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:06.354862   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.355590   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357189   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357554   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.359080   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:06.354862   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.355590   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357189   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357554   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.359080   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
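Note that the `describe nodes` step does not use the host's kubectl: it shells out to the version-pinned binary minikube staged on the node (`/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl`) against the node-local kubeconfig, so the failure reflects the cluster under test, not the test host. A minimal Go sketch of that invocation; the paths mirror the log, while the wrapper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		// With no apiserver running, this exits non-zero with the same
		// `connection refused` stderr captured throughout this section.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}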
	I1216 04:17:08.864854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:08.875530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:08.875607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:08.900341 2088124 cri.go:89] found id: ""
	I1216 04:17:08.900376 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.900386 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:08.900392 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:08.900453 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:08.924614 2088124 cri.go:89] found id: ""
	I1216 04:17:08.924638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.924647 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:08.924653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:08.924715 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:08.949702 2088124 cri.go:89] found id: ""
	I1216 04:17:08.949729 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.949738 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:08.949744 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:08.949803 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:08.973818 2088124 cri.go:89] found id: ""
	I1216 04:17:08.973848 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.973858 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:08.973864 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:08.973923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:08.999010 2088124 cri.go:89] found id: ""
	I1216 04:17:08.999033 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.999079 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:08.999087 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:08.999149 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:09.030095 2088124 cri.go:89] found id: ""
	I1216 04:17:09.030122 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.030131 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:09.030138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:09.030198 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:09.054300 2088124 cri.go:89] found id: ""
	I1216 04:17:09.054324 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.054332 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:09.054339 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:09.054397 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:09.078301 2088124 cri.go:89] found id: ""
	I1216 04:17:09.078328 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.078337 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:09.078346 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:09.078358 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:09.106185 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:09.106220 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:09.161474 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:09.161513 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:09.177365 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:09.177394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:09.242353 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:09.233841   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.234299   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236131   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236653   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.238465   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:09.233841   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.234299   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236131   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236653   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.238465   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:09.242378 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:09.242392 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:11.767582 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:11.779587 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:11.779667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:11.806280 2088124 cri.go:89] found id: ""
	I1216 04:17:11.806308 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.806317 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:11.806323 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:11.806386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:11.831161 2088124 cri.go:89] found id: ""
	I1216 04:17:11.831187 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.831196 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:11.831203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:11.831262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:11.859758 2088124 cri.go:89] found id: ""
	I1216 04:17:11.859781 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.859790 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:11.859796 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:11.859853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:11.884445 2088124 cri.go:89] found id: ""
	I1216 04:17:11.884473 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.884483 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:11.884489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:11.884567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:11.909783 2088124 cri.go:89] found id: ""
	I1216 04:17:11.909860 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.909886 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:11.909904 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:11.909989 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:11.934802 2088124 cri.go:89] found id: ""
	I1216 04:17:11.934833 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.934842 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:11.934848 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:11.934909 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:11.961240 2088124 cri.go:89] found id: ""
	I1216 04:17:11.961318 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.961344 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:11.961358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:11.961431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:11.985352 2088124 cri.go:89] found id: ""
	I1216 04:17:11.985380 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.985389 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:11.985404 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:11.985416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:12.050891 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:12.042955   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.043613   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045154   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045461   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.046975   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:12.050912 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:12.050925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:12.076153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:12.076186 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:12.108364 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:12.108393 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:12.164122 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:12.164161 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
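	Each polling cycle above follows the same sequence: look for a running kube-apiserver process, then ask the CRI for every expected control-plane container by name; all eight queries come back empty, so minikube falls back to gathering node-level logs. The per-component enumeration in the cri.go lines reduces to a loop like this (a sketch of the equivalent shell, run inside the node):

	    # Equivalent of the cri.go queries above: list containers in any state, by name.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container matching $name"
	    done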
	I1216 04:17:14.681316 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:14.698056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:14.698131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:14.764358 2088124 cri.go:89] found id: ""
	I1216 04:17:14.764382 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.764391 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:14.764397 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:14.764468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:14.792079 2088124 cri.go:89] found id: ""
	I1216 04:17:14.792110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.792120 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:14.792130 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:14.792197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:14.817831 2088124 cri.go:89] found id: ""
	I1216 04:17:14.817857 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.817867 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:14.817875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:14.817935 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:14.846609 2088124 cri.go:89] found id: ""
	I1216 04:17:14.846638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.846646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:14.846653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:14.846712 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:14.871213 2088124 cri.go:89] found id: ""
	I1216 04:17:14.871237 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.871246 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:14.871255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:14.871313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:14.896165 2088124 cri.go:89] found id: ""
	I1216 04:17:14.896192 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.896201 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:14.896208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:14.896269 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:14.922595 2088124 cri.go:89] found id: ""
	I1216 04:17:14.922621 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.922629 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:14.922635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:14.922698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:14.949236 2088124 cri.go:89] found id: ""
	I1216 04:17:14.949303 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.949327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:14.949344 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:14.949356 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:15.027151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:15.009633   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.011301   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.012737   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.013346   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.016022   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
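	The describe-nodes probe runs the node-staged kubectl binary against the node-local kubeconfig, so it exercises the same localhost:8443 endpoint the cluster's own components would use; its failure suggests the problem is the apiserver itself rather than a host-side kubeconfig. Reproduced as a standalone command, with both paths taken verbatim from the log:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig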
	I1216 04:17:15.027238 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:15.027269 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:15.060605 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:15.060646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:15.093643 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:15.093728 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:15.150597 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:15.150635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
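	With no containers to inspect, the only logs left to gather come from the node itself: kubelet and containerd via journalctl, kernel warnings via dmesg, the describe-nodes attempt (which fails like everything else), and a raw container listing. The container-status command is written defensively so it still works when crictl is not on PATH or the runtime is Docker; a commented copy of the exact command the log shows:

	    # `which crictl || echo crictl`: use the resolved crictl path if found,
	    # otherwise fall back to the bare name; if crictl fails entirely,
	    # try the Docker CLI instead.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a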
	I1216 04:17:17.668643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:17.679947 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:17.680020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:17.722386 2088124 cri.go:89] found id: ""
	I1216 04:17:17.722409 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.722417 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:17.722423 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:17.722487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:17.775941 2088124 cri.go:89] found id: ""
	I1216 04:17:17.775964 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.775974 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:17.775980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:17.776040 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:17.802436 2088124 cri.go:89] found id: ""
	I1216 04:17:17.802458 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.802467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:17.802473 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:17.802532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:17.828371 2088124 cri.go:89] found id: ""
	I1216 04:17:17.828399 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.828409 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:17.828415 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:17.828479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:17.853344 2088124 cri.go:89] found id: ""
	I1216 04:17:17.853370 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.853379 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:17.853386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:17.853479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:17.881429 2088124 cri.go:89] found id: ""
	I1216 04:17:17.881456 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.881465 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:17.881471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:17.881533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:17.904862 2088124 cri.go:89] found id: ""
	I1216 04:17:17.904938 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.904961 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:17.904975 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:17.905050 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:17.929897 2088124 cri.go:89] found id: ""
	I1216 04:17:17.929977 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.930001 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:17.930028 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:17.930064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:17.998744 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:17.990296   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.990760   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.992790   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.993233   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.994439   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:17.998813 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:17.998840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:18.026132 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:18.026171 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:18.058645 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:18.058676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:18.115432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:18.115467 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:20.631899 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:20.643452 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:20.643535 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:20.668165 2088124 cri.go:89] found id: ""
	I1216 04:17:20.668190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.668199 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:20.668205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:20.668263 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:20.724732 2088124 cri.go:89] found id: ""
	I1216 04:17:20.724759 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.724768 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:20.724774 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:20.724845 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:20.771015 2088124 cri.go:89] found id: ""
	I1216 04:17:20.771058 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.771068 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:20.771075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:20.771155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:20.805632 2088124 cri.go:89] found id: ""
	I1216 04:17:20.805662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.805672 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:20.805679 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:20.805747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:20.835160 2088124 cri.go:89] found id: ""
	I1216 04:17:20.835226 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.835242 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:20.835249 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:20.835308 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:20.861499 2088124 cri.go:89] found id: ""
	I1216 04:17:20.861522 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.861531 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:20.861538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:20.861595 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:20.885895 2088124 cri.go:89] found id: ""
	I1216 04:17:20.885919 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.885928 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:20.885934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:20.885998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:20.910445 2088124 cri.go:89] found id: ""
	I1216 04:17:20.910468 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.910477 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:20.910486 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:20.910498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:20.966176 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:20.966211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:20.983062 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:20.983092 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:21.049819 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:21.041149   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.041819   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.043484   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.044157   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.045775   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:21.049842 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:21.049856 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:21.075330 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:21.075370 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
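	The timestamps show the whole cycle repeating on a roughly three-second cadence (04:17:11, :14, :17, :20, :23, ...), consistent with a wait-for-apiserver poll that only stops on success or an overall timeout. A hypothetical reconstruction of that outer loop, assuming a fixed sleep rather than minikube's actual Go-side interval and timeout handling:

	    # Sketch only: minikube's real wait loop lives in Go and enforces an
	    # overall deadline; this reproduces just the observed polling rhythm.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3   # matches the ~3 s spacing of the cycles in this log
	    done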
	I1216 04:17:23.603121 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:23.613760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:23.613834 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:23.642856 2088124 cri.go:89] found id: ""
	I1216 04:17:23.642882 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.642890 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:23.642897 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:23.642957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:23.671150 2088124 cri.go:89] found id: ""
	I1216 04:17:23.671175 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.671183 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:23.671189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:23.671247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:23.733230 2088124 cri.go:89] found id: ""
	I1216 04:17:23.733256 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.733265 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:23.733271 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:23.733330 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:23.782653 2088124 cri.go:89] found id: ""
	I1216 04:17:23.782679 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.782688 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:23.782694 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:23.782759 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:23.810224 2088124 cri.go:89] found id: ""
	I1216 04:17:23.810249 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.810259 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:23.810266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:23.810327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:23.835579 2088124 cri.go:89] found id: ""
	I1216 04:17:23.835604 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.835613 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:23.835620 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:23.835680 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:23.864585 2088124 cri.go:89] found id: ""
	I1216 04:17:23.864610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.864618 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:23.864625 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:23.864683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:23.892217 2088124 cri.go:89] found id: ""
	I1216 04:17:23.892294 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.892311 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:23.892322 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:23.892334 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:23.955889 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:23.947392   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.947993   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949516   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949846   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.951412   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:23.955910 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:23.955929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:23.983017 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:23.983064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:24.018919 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:24.018946 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:24.076537 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:24.076578 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:26.592968 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:26.603896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:26.603971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:26.628560 2088124 cri.go:89] found id: ""
	I1216 04:17:26.628583 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.628591 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:26.628597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:26.628663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:26.655525 2088124 cri.go:89] found id: ""
	I1216 04:17:26.655549 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.655558 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:26.655564 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:26.655627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:26.681142 2088124 cri.go:89] found id: ""
	I1216 04:17:26.681169 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.681178 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:26.681185 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:26.681245 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:26.726046 2088124 cri.go:89] found id: ""
	I1216 04:17:26.726069 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.726078 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:26.726084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:26.726145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:26.761483 2088124 cri.go:89] found id: ""
	I1216 04:17:26.761558 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.761570 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:26.761578 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:26.761670 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:26.804988 2088124 cri.go:89] found id: ""
	I1216 04:17:26.805062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.805085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:26.805104 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:26.805191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:26.835017 2088124 cri.go:89] found id: ""
	I1216 04:17:26.835107 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.835132 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:26.835146 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:26.835222 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:26.864963 2088124 cri.go:89] found id: ""
	I1216 04:17:26.864989 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.864998 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:26.865008 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:26.865020 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:26.920931 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:26.920966 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:26.936801 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:26.936828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:27.001379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:26.993556   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.993942   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.995579   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.996105   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.997606   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:27.001453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:27.001473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:27.029301 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:27.029338 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.560341 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:29.570732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:29.570810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:29.594792 2088124 cri.go:89] found id: ""
	I1216 04:17:29.594819 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.594828 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:29.594835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:29.594900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:29.619488 2088124 cri.go:89] found id: ""
	I1216 04:17:29.619514 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.619523 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:29.619530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:29.619589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:29.644688 2088124 cri.go:89] found id: ""
	I1216 04:17:29.644711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.644720 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:29.644726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:29.644792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:29.670117 2088124 cri.go:89] found id: ""
	I1216 04:17:29.670143 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.670152 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:29.670158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:29.670246 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:29.744231 2088124 cri.go:89] found id: ""
	I1216 04:17:29.744258 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.744267 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:29.744273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:29.744333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:29.784178 2088124 cri.go:89] found id: ""
	I1216 04:17:29.784201 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.784211 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:29.784217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:29.784278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:29.813318 2088124 cri.go:89] found id: ""
	I1216 04:17:29.813341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.813349 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:29.813355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:29.813414 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:29.841947 2088124 cri.go:89] found id: ""
	I1216 04:17:29.841973 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.841981 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:29.841991 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:29.842003 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.872423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:29.872449 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:29.927890 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:29.927927 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:29.943872 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:29.943903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:30.030211 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:30.002270   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.003334   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.011701   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.013525   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.017907   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:30.030233 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:30.030247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:32.571327 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:32.582193 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:32.582264 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:32.614548 2088124 cri.go:89] found id: ""
	I1216 04:17:32.614575 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.614584 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:32.614591 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:32.614656 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:32.639581 2088124 cri.go:89] found id: ""
	I1216 04:17:32.639609 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.639618 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:32.639624 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:32.639690 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:32.664409 2088124 cri.go:89] found id: ""
	I1216 04:17:32.664431 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.664440 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:32.664446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:32.664540 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:32.702042 2088124 cri.go:89] found id: ""
	I1216 04:17:32.702068 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.702077 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:32.702083 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:32.702143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:32.744945 2088124 cri.go:89] found id: ""
	I1216 04:17:32.744972 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.744981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:32.744988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:32.745073 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:32.789635 2088124 cri.go:89] found id: ""
	I1216 04:17:32.789662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.789671 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:32.789678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:32.789739 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:32.815679 2088124 cri.go:89] found id: ""
	I1216 04:17:32.815707 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.815717 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:32.815724 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:32.815787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:32.841170 2088124 cri.go:89] found id: ""
	I1216 04:17:32.841195 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.841204 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:32.841213 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:32.841224 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:32.897709 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:32.897747 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:32.913830 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:32.913862 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:32.978618 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:32.969623   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.970476   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972369   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972985   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.974627   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:32.978642 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:32.978655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:33.004220 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:33.004272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
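The probe above is minikube's control-plane health loop: for each expected component it asks crictl for any matching container, running or exited, and logs a warning when nothing is found. A minimal bash sketch of the same loop, assuming only that crictl is on the node's PATH (component names taken from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # --quiet prints container IDs only; an empty result means no container,
      # running or exited, matches this name.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done

Every pass in this section comes back empty for every name, which is why minikube falls through to gathering kubelet, dmesg, containerd, and container-status logs each time.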
	I1216 04:17:35.534506 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:35.545218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:35.545290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:35.570921 2088124 cri.go:89] found id: ""
	I1216 04:17:35.570949 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.570958 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:35.570965 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:35.571023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:35.596188 2088124 cri.go:89] found id: ""
	I1216 04:17:35.596216 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.596226 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:35.596232 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:35.596290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:35.621275 2088124 cri.go:89] found id: ""
	I1216 04:17:35.621298 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.621307 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:35.621313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:35.621373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:35.646280 2088124 cri.go:89] found id: ""
	I1216 04:17:35.646304 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.646312 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:35.646319 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:35.646380 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:35.674777 2088124 cri.go:89] found id: ""
	I1216 04:17:35.674850 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.674874 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:35.674894 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:35.674969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:35.734693 2088124 cri.go:89] found id: ""
	I1216 04:17:35.734716 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.734725 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:35.734732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:35.734792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:35.776099 2088124 cri.go:89] found id: ""
	I1216 04:17:35.776121 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.776129 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:35.776136 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:35.776195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:35.809643 2088124 cri.go:89] found id: ""
	I1216 04:17:35.809720 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.809744 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:35.809765 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:35.809805 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:35.865415 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:35.865452 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:35.880891 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:35.880969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:35.943467 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:35.935628   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.936425   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938104   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938398   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.939836   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:35.943485 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:35.943497 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:35.968153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:35.968187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
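The container-status command is a one-liner with a built-in fallback: the backtick substitution inserts the crictl path if `which` finds one (or the bare word crictl, which then fails under sudo), and the trailing `|| sudo docker ps -a` retries the same all-containers listing with docker. A slightly more explicit equivalent:

    # Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # Prefer crictl when installed; if it is missing or errors, fall back to docker.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a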
	I1216 04:17:38.502135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:38.512843 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:38.512915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:38.537512 2088124 cri.go:89] found id: ""
	I1216 04:17:38.537537 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.537546 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:38.537553 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:38.537618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:38.563124 2088124 cri.go:89] found id: ""
	I1216 04:17:38.563159 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.563168 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:38.563174 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:38.563265 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:38.589894 2088124 cri.go:89] found id: ""
	I1216 04:17:38.589918 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.589927 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:38.589933 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:38.590001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:38.615078 2088124 cri.go:89] found id: ""
	I1216 04:17:38.615104 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.615114 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:38.615120 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:38.615188 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:38.640365 2088124 cri.go:89] found id: ""
	I1216 04:17:38.640397 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.640406 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:38.640416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:38.640486 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:38.664018 2088124 cri.go:89] found id: ""
	I1216 04:17:38.664095 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.664116 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:38.664125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:38.664194 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:38.704314 2088124 cri.go:89] found id: ""
	I1216 04:17:38.704341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.704350 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:38.704356 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:38.704415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:38.747321 2088124 cri.go:89] found id: ""
	I1216 04:17:38.747349 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.747357 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:38.747366 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:38.747377 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:38.778906 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:38.778937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:38.846005 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:38.837440   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.837970   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.839654   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.840202   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.841808   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:38.846026 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:38.846039 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:38.872344 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:38.872381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:38.907009 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:38.907060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:41.467452 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:41.478044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:41.478160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:41.505036 2088124 cri.go:89] found id: ""
	I1216 04:17:41.505062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.505072 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:41.505079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:41.505163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:41.533010 2088124 cri.go:89] found id: ""
	I1216 04:17:41.533044 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.533054 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:41.533078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:41.533160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:41.557094 2088124 cri.go:89] found id: ""
	I1216 04:17:41.557166 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.557181 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:41.557188 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:41.557261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:41.585674 2088124 cri.go:89] found id: ""
	I1216 04:17:41.585718 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.585727 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:41.585734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:41.585805 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:41.610276 2088124 cri.go:89] found id: ""
	I1216 04:17:41.610311 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.610320 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:41.610327 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:41.610398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:41.636914 2088124 cri.go:89] found id: ""
	I1216 04:17:41.636981 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.637010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:41.637025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:41.637097 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:41.665097 2088124 cri.go:89] found id: ""
	I1216 04:17:41.665161 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.665187 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:41.665202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:41.665279 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:41.727525 2088124 cri.go:89] found id: ""
	I1216 04:17:41.727553 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.727562 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:41.727571 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:41.727589 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:41.817873 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:41.817913 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:41.834790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:41.834817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:41.903430 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:41.895682   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.896090   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897605   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897915   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.899505   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:41.903453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:41.903465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:41.928600 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:41.928640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
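Every describe-nodes attempt in this section fails the same way: kubectl gets "connection refused" from [::1]:8443, meaning nothing is listening on the apiserver port at all. A direct probe confirming that, assuming it is run on the minikube node and curl is available:

    # Nothing answering on 8443 explains every "connection refused" above.
    # --max-time keeps the probe from hanging if packets are dropped instead of refused.
    curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo "kube-apiserver is not listening on 8443"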
	I1216 04:17:44.456049 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:44.466779 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:44.466853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:44.493084 2088124 cri.go:89] found id: ""
	I1216 04:17:44.493110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.493119 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:44.493126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:44.493185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:44.517683 2088124 cri.go:89] found id: ""
	I1216 04:17:44.517717 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.517727 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:44.517734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:44.517810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:44.541717 2088124 cri.go:89] found id: ""
	I1216 04:17:44.541749 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.541758 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:44.541764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:44.541830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:44.565684 2088124 cri.go:89] found id: ""
	I1216 04:17:44.565711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.565723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:44.565729 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:44.565796 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:44.590246 2088124 cri.go:89] found id: ""
	I1216 04:17:44.590285 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.590293 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:44.590300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:44.590372 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:44.618255 2088124 cri.go:89] found id: ""
	I1216 04:17:44.618284 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.618292 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:44.618299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:44.618367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:44.648191 2088124 cri.go:89] found id: ""
	I1216 04:17:44.648219 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.648228 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:44.648234 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:44.648295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:44.679499 2088124 cri.go:89] found id: ""
	I1216 04:17:44.679574 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.679598 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:44.679615 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:44.679640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:44.758228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:44.758267 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:44.779294 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:44.779331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:44.858723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:44.849709   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.850588   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852326   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852658   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.854144   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:44.858749 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:44.858764 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:44.883969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:44.884008 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:47.413411 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:47.423987 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:47.424106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:47.449248 2088124 cri.go:89] found id: ""
	I1216 04:17:47.449314 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.449329 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:47.449336 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:47.449398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:47.475548 2088124 cri.go:89] found id: ""
	I1216 04:17:47.475578 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.475587 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:47.475593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:47.475655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:47.500110 2088124 cri.go:89] found id: ""
	I1216 04:17:47.500177 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.500199 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:47.500218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:47.500306 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:47.530628 2088124 cri.go:89] found id: ""
	I1216 04:17:47.530696 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.530723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:47.530741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:47.530826 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:47.556437 2088124 cri.go:89] found id: ""
	I1216 04:17:47.556464 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.556473 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:47.556479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:47.556549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:47.581048 2088124 cri.go:89] found id: ""
	I1216 04:17:47.581071 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.581081 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:47.581088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:47.581148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:47.606560 2088124 cri.go:89] found id: ""
	I1216 04:17:47.606588 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.606596 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:47.606603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:47.606663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:47.640327 2088124 cri.go:89] found id: ""
	I1216 04:17:47.640352 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.640360 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:47.640370 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:47.640388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:47.702815 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:47.702920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:47.736710 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:47.736751 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:47.839518 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:47.830501   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.831305   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833129   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833682   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.835432   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:47.839540 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:47.839554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:47.865722 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:47.865758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:50.397056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:50.409097 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:50.409241 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:50.437681 2088124 cri.go:89] found id: ""
	I1216 04:17:50.437704 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.437714 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:50.437743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:50.437829 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:50.462756 2088124 cri.go:89] found id: ""
	I1216 04:17:50.462783 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.462791 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:50.462798 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:50.462914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:50.487724 2088124 cri.go:89] found id: ""
	I1216 04:17:50.487751 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.487760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:50.487767 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:50.487873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:50.513141 2088124 cri.go:89] found id: ""
	I1216 04:17:50.513208 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.513219 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:50.513237 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:50.513315 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:50.538993 2088124 cri.go:89] found id: ""
	I1216 04:17:50.539100 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.539124 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:50.539144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:50.539231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:50.564296 2088124 cri.go:89] found id: ""
	I1216 04:17:50.564319 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.564328 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:50.564335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:50.564395 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:50.587840 2088124 cri.go:89] found id: ""
	I1216 04:17:50.587865 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.587874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:50.587880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:50.587941 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:50.616481 2088124 cri.go:89] found id: ""
	I1216 04:17:50.616555 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.616577 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:50.616595 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:50.616611 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:50.674183 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:50.674218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:50.705566 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:50.705596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:50.817242 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:50.808677   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.809401   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811128   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811612   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.813344   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:50.817265 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:50.817278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:50.842758 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:50.842792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.372576 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:53.383245 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:53.383313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:53.407745 2088124 cri.go:89] found id: ""
	I1216 04:17:53.407767 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.407775 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:53.407781 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:53.407839 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:53.435170 2088124 cri.go:89] found id: ""
	I1216 04:17:53.435194 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.435203 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:53.435209 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:53.435268 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:53.461399 2088124 cri.go:89] found id: ""
	I1216 04:17:53.461426 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.461437 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:53.461443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:53.461504 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:53.492254 2088124 cri.go:89] found id: ""
	I1216 04:17:53.492279 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.492289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:53.492295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:53.492356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:53.515778 2088124 cri.go:89] found id: ""
	I1216 04:17:53.515802 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.515810 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:53.515816 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:53.515875 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:53.539474 2088124 cri.go:89] found id: ""
	I1216 04:17:53.539498 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.539508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:53.539514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:53.539576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:53.565164 2088124 cri.go:89] found id: ""
	I1216 04:17:53.565229 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.565255 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:53.565273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:53.565359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:53.589875 2088124 cri.go:89] found id: ""
	I1216 04:17:53.589941 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.589963 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:53.589984 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:53.590026 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:53.654018 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:53.644813   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.645597   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647434   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647966   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.649437   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:53.654042 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:53.654059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:53.679510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:53.679548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.719485 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:53.719514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:53.792435 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:53.792471 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
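Each retry cycle gathers the same log sources before probing again. Reproducing that collection by hand, using the exact commands from the log (the /tmp output paths are illustrative, not from the log):

    # Last 400 lines of the units minikube inspects, plus filtered kernel messages.
    sudo journalctl -u kubelet -n 400    > /tmp/kubelet.log
    sudo journalctl -u containerd -n 400 > /tmp/containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log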
	I1216 04:17:56.314262 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:56.325267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:56.325348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:56.350886 2088124 cri.go:89] found id: ""
	I1216 04:17:56.350908 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.350917 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:56.350923 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:56.350985 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:56.375203 2088124 cri.go:89] found id: ""
	I1216 04:17:56.375230 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.375239 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:56.375246 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:56.375305 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:56.400956 2088124 cri.go:89] found id: ""
	I1216 04:17:56.400980 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.400988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:56.400994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:56.401055 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:56.426054 2088124 cri.go:89] found id: ""
	I1216 04:17:56.426077 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.426086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:56.426093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:56.426154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:56.451881 2088124 cri.go:89] found id: ""
	I1216 04:17:56.451905 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.451914 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:56.451920 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:56.452029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:56.483163 2088124 cri.go:89] found id: ""
	I1216 04:17:56.483190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.483199 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:56.483223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:56.483297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:56.509283 2088124 cri.go:89] found id: ""
	I1216 04:17:56.509307 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.509316 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:56.509321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:56.509386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:56.533713 2088124 cri.go:89] found id: ""
	I1216 04:17:56.533788 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.533813 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:56.533851 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:56.533883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:56.591786 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:56.591822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:56.608010 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:56.608041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:56.677352 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:56.669278   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.669934   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.671527   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.672102   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.673230   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:56.669278   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.669934   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.671527   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.672102   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.673230   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:56.677375 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:56.677388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:56.710597 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:56.710632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:59.260233 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:59.274612 2088124 out.go:203] 
	W1216 04:17:59.277673 2088124 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 04:17:59.277728 2088124 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 04:17:59.277743 2088124 out.go:285] * Related issues:
	W1216 04:17:59.277759 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 04:17:59.277770 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 04:17:59.280576 2088124 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617588600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617668196Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617791517Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617872237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617937228Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618063593Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618133950Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618196685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618268330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618361662Z" level=info msg="Connect containerd service"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618902392Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.619957818Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.633592863Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.633677981Z" level=info msg="Start recovering state"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.635272389Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.635428480Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673007758Z" level=info msg="Start event monitor"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673059047Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673070189Z" level=info msg="Start streaming server"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673079715Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673091785Z" level=info msg="runtime interface starting up..."
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673098340Z" level=info msg="starting plugins..."
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673130848Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673430452Z" level=info msg="containerd successfully booted in 0.082787s"
	Dec 16 04:11:55 newest-cni-450938 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:18:08.601606   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:08.602315   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:08.603875   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:08.604370   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:08.605934   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:18:08 up 10:00,  0 user,  load average: 0.62, 0.59, 1.06
	Linux newest-cni-450938 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:18:04 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:04 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:04 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:06 newest-cni-450938 kubelet[13609]: E1216 04:18:06.175976   13609 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:06 newest-cni-450938 kubelet[13648]: E1216 04:18:06.985031   13648 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:06 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:07 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 16 04:18:07 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:07 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:07 newest-cni-450938 kubelet[13667]: E1216 04:18:07.781200   13667 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:07 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:07 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:08 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 16 04:18:08 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:08 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:08 newest-cni-450938 kubelet[13743]: E1216 04:18:08.498154   13743 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:08 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:08 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
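
The kubelet section above pinpoints the failure: every kubelet start exits with "kubelet is configured to not run on a host using cgroup v1", so the static pods (including kube-apiserver) are never created, the crictl sweeps earlier in the log find zero containers, and the run ends in K8S_APISERVER_MISSING. The docker info captured further down places the host on Ubuntu 20.04 (kernel 5.15.0-1084-aws), which still boots the legacy cgroup v1 hierarchy by default. A minimal spot-check, assuming shell access to the node (illustrative only, not part of the recorded run):

	# Print the filesystem type mounted at /sys/fs/cgroup:
	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1.
	stat -fc %T /sys/fs/cgroup

On a cgroup v1 host this kubelet build fails configuration validation at startup, which produces exactly the systemd restart loop captured above.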
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (401.28296ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-450938" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-450938
helpers_test.go:244: (dbg) docker inspect newest-cni-450938:

-- stdout --
	[
	    {
	        "Id": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	        "Created": "2025-12-16T04:01:45.321904496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2088249,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:11:49.715157618Z",
	            "FinishedAt": "2025-12-16T04:11:48.344695153Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/hosts",
	        "LogPath": "/var/lib/docker/containers/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65/e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65-json.log",
	        "Name": "/newest-cni-450938",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450938:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-450938",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2dde4cac2e0df598e4cefd81cf702e2b3da22a7c22337a4e5f237438b936e65",
	                "LowerDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f113d8be12db93724b818499e5c245c60602562a45102e19db7340fe27ef5afc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450938",
	                "Source": "/var/lib/docker/volumes/newest-cni-450938/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450938",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450938",
	                "name.minikube.sigs.k8s.io": "newest-cni-450938",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d040f98e420d560a9e17e89d3d7fe4b27a499b96ccdebe83fcb72878ac3aa5a",
	            "SandboxKey": "/var/run/docker/netns/0d040f98e420",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34669"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34673"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34671"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34672"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450938": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:64:e7:5f:26:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "961937bd6f37532287f488797e74382e326ca0852d2ef3f8a1d23a546f1f7d1a",
	                    "EndpointID": "06c1897ed9171a5e6bbd198d06b0b6b16523d38b6c9e3e64ec0084e4fa9e4f3b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450938",
	                        "e2dde4cac2e0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
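
For reference, the inspect output shows the container itself came back cleanly: State.Status is "running", RestartCount is 0, and StartedAt (04:11:49) postdates the FinishedAt of the preceding stop, so the Docker layer is healthy and the breakage is confined to the processes inside the container. The same verdict can be read without the full JSON via docker's --format templating (illustrative, not part of the recorded run):

	# Prints e.g. "running 0" for this container.
	docker inspect --format '{{.State.Status}} {{.RestartCount}}' newest-cni-450938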
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (305.865473ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
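Note the split verdict between the two status probes: {{.Host}} reports "Running" while {{.APIServer}} above reported "Stopped", i.e. the machine is up but Kubernetes inside it never started, consistent with the kubelet crash loop. Assuming minikube's Go-template --format accepts multiple fields in one template, both can be queried in a single call (illustrative):

	# Expected output for this profile: Running/Stopped
	out/minikube-linux-arm64 status --format '{{.Host}}/{{.APIServer}}' -p newest-cni-450938 -n newest-cni-450938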
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-450938 logs -n 25: (1.580643915s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p embed-certs-092028                                                                                                                                                                                                                                      │ embed-certs-092028           │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ delete  │ -p disable-driver-mounts-650877                                                                                                                                                                                                                            │ disable-driver-mounts-650877 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 03:58 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 03:58 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ stop    │ -p default-k8s-diff-port-862404 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:00 UTC │
	│ start   │ -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:00 UTC │ 16 Dec 25 04:01 UTC │
	│ image   │ default-k8s-diff-port-862404 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ pause   │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ unpause │ -p default-k8s-diff-port-862404 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ delete  │ -p default-k8s-diff-port-862404                                                                                                                                                                                                                            │ default-k8s-diff-port-862404 │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │ 16 Dec 25 04:01 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-255023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:02 UTC │                     │
	│ stop    │ -p no-preload-255023 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ addons  │ enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │ 16 Dec 25 04:04 UTC │
	│ start   │ -p no-preload-255023 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-255023            │ jenkins │ v1.37.0 │ 16 Dec 25 04:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-450938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	│ stop    │ -p newest-cni-450938 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ addons  │ enable dashboard -p newest-cni-450938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ start   │ -p newest-cni-450938 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │                     │
	│ image   │ newest-cni-450938 image list --format=json                                                                                                                                                                                                                 │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ pause   │ -p newest-cni-450938 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	│ unpause │ -p newest-cni-450938 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-450938            │ jenkins │ v1.37.0 │ 16 Dec 25 04:18 UTC │ 16 Dec 25 04:18 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:11:49
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:11:49.443609 2088124 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:11:49.443766 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.443791 2088124 out.go:374] Setting ErrFile to fd 2...
	I1216 04:11:49.443797 2088124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:49.444086 2088124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:11:49.444552 2088124 out.go:368] Setting JSON to false
	I1216 04:11:49.445491 2088124 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35654,"bootTime":1765822656,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:11:49.445560 2088124 start.go:143] virtualization:  
	I1216 04:11:49.450767 2088124 out.go:179] * [newest-cni-450938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:11:49.453684 2088124 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:11:49.453830 2088124 notify.go:221] Checking for updates...
	I1216 04:11:49.459490 2088124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:11:49.462425 2088124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:49.465199 2088124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:11:49.468049 2088124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:11:49.470926 2088124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:11:49.474323 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:49.474898 2088124 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:11:49.507547 2088124 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:11:49.507675 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.559588 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.550344871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.559694 2088124 docker.go:319] overlay module found
	I1216 04:11:49.564661 2088124 out.go:179] * Using the docker driver based on existing profile
	I1216 04:11:49.567577 2088124 start.go:309] selected driver: docker
	I1216 04:11:49.567592 2088124 start.go:927] validating driver "docker" against &{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.567688 2088124 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:11:49.568412 2088124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:49.630893 2088124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:11:49.62154899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:49.631269 2088124 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 04:11:49.631299 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:49.631354 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:11:49.631398 2088124 start.go:353] cluster config:
	{Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:49.634471 2088124 out.go:179] * Starting "newest-cni-450938" primary control-plane node in "newest-cni-450938" cluster
	I1216 04:11:49.637273 2088124 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:11:49.640282 2088124 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:11:49.643072 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:49.643109 2088124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:11:49.643124 2088124 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 04:11:49.643134 2088124 cache.go:65] Caching tarball of preloaded images
	I1216 04:11:49.643213 2088124 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:11:49.643223 2088124 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1216 04:11:49.643349 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:49.663232 2088124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:11:49.663256 2088124 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:11:49.663277 2088124 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:11:49.663307 2088124 start.go:360] acquireMachinesLock for newest-cni-450938: {Name:mk874c56eb171e87c93def72ccf1175c51c96e33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:11:49.663368 2088124 start.go:364] duration metric: took 37.825µs to acquireMachinesLock for "newest-cni-450938"
	I1216 04:11:49.663390 2088124 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:11:49.663398 2088124 fix.go:54] fixHost starting: 
	I1216 04:11:49.663657 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.680807 2088124 fix.go:112] recreateIfNeeded on newest-cni-450938: state=Stopped err=<nil>
	W1216 04:11:49.680842 2088124 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:11:49.684150 2088124 out.go:252] * Restarting existing docker container for "newest-cni-450938" ...
	I1216 04:11:49.684240 2088124 cli_runner.go:164] Run: docker start newest-cni-450938
	I1216 04:11:49.955342 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:49.981840 2088124 kic.go:430] container "newest-cni-450938" state is running.
	I1216 04:11:49.982211 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:50.021278 2088124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/config.json ...
	I1216 04:11:50.021527 2088124 machine.go:94] provisionDockerMachine start ...
	I1216 04:11:50.021596 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:50.049595 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:50.050060 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:50.050075 2088124 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:11:50.050748 2088124 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 04:11:53.188290 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.188358 2088124 ubuntu.go:182] provisioning hostname "newest-cni-450938"
	I1216 04:11:53.188485 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.208640 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.208973 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.208992 2088124 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-450938 && echo "newest-cni-450938" | sudo tee /etc/hostname
	I1216 04:11:53.354850 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-450938
	
	I1216 04:11:53.354932 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:53.373349 2088124 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:53.373653 2088124 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34669 <nil> <nil>}
	I1216 04:11:53.373677 2088124 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450938/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:11:53.507317 2088124 main.go:143] libmachine: SSH cmd err, output: <nil>: 
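The shell block above pins the new hostname in /etc/hosts: rewrite the 127.0.1.1 entry if one exists, otherwise append one. The same branch expressed in Go over an in-memory copy of the file (illustrative only):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // pinHostname mirrors the sed/tee logic from the log: replace an existing
    // 127.0.1.1 line if present, otherwise append one.
    func pinHostname(hosts, name string) string {
    	lines := strings.Split(hosts, "\n")
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + "\n127.0.1.1 " + name
    }

    func main() {
    	fmt.Println(pinHostname("127.0.0.1 localhost", "newest-cni-450938"))
    }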
	I1216 04:11:53.507346 2088124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:11:53.507369 2088124 ubuntu.go:190] setting up certificates
	I1216 04:11:53.507379 2088124 provision.go:84] configureAuth start
	I1216 04:11:53.507463 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:53.525162 2088124 provision.go:143] copyHostCerts
	I1216 04:11:53.525241 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:11:53.525251 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:11:53.525327 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:11:53.525423 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:11:53.525428 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:11:53.525453 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:11:53.525509 2088124 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:11:53.525514 2088124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:11:53.525536 2088124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:11:53.525580 2088124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450938 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-450938]
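configureAuth generates a server certificate whose SANs are exactly the list in the log line above: loopback, the container IP, and the machine names. A self-contained Go sketch of issuing such a cert; minikube signs it with its CA, while this sketch self-signs purely to stay short:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-450938"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list copied from the san=[...] log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-450938"},
    	}
    	// Self-signed (template == parent); minikube uses ca.pem/ca-key.pem instead.
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }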
	I1216 04:11:54.045695 2088124 provision.go:177] copyRemoteCerts
	I1216 04:11:54.045768 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:11:54.045810 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.066867 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.167270 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:11:54.185959 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:11:54.204990 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:11:54.223347 2088124 provision.go:87] duration metric: took 715.940901ms to configureAuth
	I1216 04:11:54.223373 2088124 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:11:54.223571 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:54.223579 2088124 machine.go:97] duration metric: took 4.202043696s to provisionDockerMachine
	I1216 04:11:54.223586 2088124 start.go:293] postStartSetup for "newest-cni-450938" (driver="docker")
	I1216 04:11:54.223597 2088124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:11:54.223657 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:11:54.223694 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.241386 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.339071 2088124 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:11:54.342372 2088124 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:11:54.342404 2088124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:11:54.342417 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:11:54.342476 2088124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:11:54.342569 2088124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:11:54.342679 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:11:54.350184 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:54.367994 2088124 start.go:296] duration metric: took 144.392831ms for postStartSetup
	I1216 04:11:54.368092 2088124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:11:54.368136 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.385560 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.484799 2088124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:11:54.491513 2088124 fix.go:56] duration metric: took 4.828106411s for fixHost
	I1216 04:11:54.491541 2088124 start.go:83] releasing machines lock for "newest-cni-450938", held for 4.82816163s
	I1216 04:11:54.491612 2088124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450938
	I1216 04:11:54.509094 2088124 ssh_runner.go:195] Run: cat /version.json
	I1216 04:11:54.509138 2088124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:11:54.509150 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.509206 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:54.527383 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.529259 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:54.622646 2088124 ssh_runner.go:195] Run: systemctl --version
	I1216 04:11:54.714029 2088124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:11:54.718486 2088124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:11:54.718568 2088124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:11:54.726541 2088124 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:11:54.726568 2088124 start.go:496] detecting cgroup driver to use...
	I1216 04:11:54.726632 2088124 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:11:54.726714 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:11:54.745031 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:11:54.758297 2088124 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:11:54.758370 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:11:54.774348 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:11:54.787565 2088124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:11:54.906330 2088124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:11:55.031458 2088124 docker.go:234] disabling docker service ...
	I1216 04:11:55.031602 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:11:55.047495 2088124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:11:55.061071 2088124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:11:55.176474 2088124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:11:55.308037 2088124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:11:55.321108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:11:55.335545 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:11:55.344904 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:11:55.354341 2088124 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:11:55.354432 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:11:55.364241 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.373363 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:11:55.382311 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:11:55.391427 2088124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:11:55.399573 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:11:55.408617 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:11:55.417842 2088124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:11:55.427155 2088124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:11:55.435028 2088124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:11:55.442465 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:55.555794 2088124 ssh_runner.go:195] Run: sudo systemctl restart containerd
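The run of sed commands above rewrites /etc/containerd/config.toml in place, most importantly forcing SystemdCgroup = false because the host cgroup driver was detected as cgroupfs; containerd is then restarted to pick the file up. One of those edits reproduced as a Go regexp, with the pattern taken verbatim from the log:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setSystemdCgroup mimics the sed edit from the log:
    //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
    func setSystemdCgroup(cfg string, enabled bool) string {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(cfg, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
    	cfg := "[plugins]\n  SystemdCgroup = true\n"
    	fmt.Print(setSystemdCgroup(cfg, false)) // cgroupfs driver detected on the host
    }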
	I1216 04:11:55.675355 2088124 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:11:55.675506 2088124 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:11:55.679491 2088124 start.go:564] Will wait 60s for crictl version
	I1216 04:11:55.679606 2088124 ssh_runner.go:195] Run: which crictl
	I1216 04:11:55.683263 2088124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:11:55.706762 2088124 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:11:55.706911 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.726295 2088124 ssh_runner.go:195] Run: containerd --version
	I1216 04:11:55.754045 2088124 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1216 04:11:55.757209 2088124 cli_runner.go:164] Run: docker network inspect newest-cni-450938 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:55.773141 2088124 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:11:55.777028 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:55.790127 2088124 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 04:11:55.792976 2088124 kubeadm.go:884] updating cluster {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:11:55.793134 2088124 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 04:11:55.793224 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.820865 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.820893 2088124 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:11:55.820953 2088124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:55.848708 2088124 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:11:55.848733 2088124 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:11:55.848741 2088124 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1216 04:11:55.848865 2088124 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-450938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:11:55.848944 2088124 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:11:55.877782 2088124 cni.go:84] Creating CNI manager for ""
	I1216 04:11:55.877809 2088124 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 04:11:55.877833 2088124 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 04:11:55.877856 2088124 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450938 NodeName:newest-cni-450938 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:11:55.877980 2088124 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-450938"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
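The kubeadm/kubelet/kube-proxy documents above are rendered from the option struct logged at kubeadm.go:190. A toy Go text/template covering just the networking stanza, with field names invented for this sketch (minikube's real template differs):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical template fragment; only the values are taken from the log.
    const tmpl = `networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	t.Execute(os.Stdout, map[string]string{
    		"DNSDomain":     "cluster.local",
    		"PodSubnet":     "10.42.0.0/16",
    		"ServiceSubnet": "10.96.0.0/12",
    	})
    }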
	
	I1216 04:11:55.878053 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:11:55.886063 2088124 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:11:55.886135 2088124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:11:55.893994 2088124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 04:11:55.906976 2088124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:11:55.921636 2088124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1216 04:11:55.935475 2088124 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:11:55.940181 2088124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:55.958241 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.086097 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:56.102803 2088124 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938 for IP: 192.168.76.2
	I1216 04:11:56.102828 2088124 certs.go:195] generating shared ca certs ...
	I1216 04:11:56.102856 2088124 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.103007 2088124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:11:56.103163 2088124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:11:56.103175 2088124 certs.go:257] generating profile certs ...
	I1216 04:11:56.103292 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/client.key
	I1216 04:11:56.103376 2088124 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key.d224429c
	I1216 04:11:56.103427 2088124 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key
	I1216 04:11:56.103545 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:11:56.103587 2088124 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:11:56.103600 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:11:56.103627 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:11:56.103658 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:11:56.103686 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:11:56.103735 2088124 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:11:56.104338 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:11:56.126254 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:11:56.147493 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:11:56.167667 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:11:56.186450 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:11:56.204453 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:11:56.222875 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:11:56.240385 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/newest-cni-450938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:11:56.257955 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:11:56.276171 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:11:56.293848 2088124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:11:56.311719 2088124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:11:56.324807 2088124 ssh_runner.go:195] Run: openssl version
	I1216 04:11:56.331262 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.338764 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:11:56.346054 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.349987 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.350052 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:11:56.391179 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:11:56.398825 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.406218 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:11:56.413696 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417638 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.417705 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:11:56.459490 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:11:56.466920 2088124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.474252 2088124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:11:56.481440 2088124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485119 2088124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.485259 2088124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:56.526344 2088124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
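Each CA dropped into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash, which is why the log checks names like 51391683.0, 3ec20f2e.0 and b5213941.0. A sketch that derives the hash with the exact openssl invocation from the log and prints the symlink it would create:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // subjectHash shells out to `openssl x509 -hash -noout -in ...`, the same
    // call the log shows; OpenSSL prints the subject-name hash used for the
    // /etc/ssl/certs/<hash>.0 symlink names.
    func subjectHash(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// e.g. "b5213941" -> /etc/ssl/certs/b5213941.0
    	fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", h)
    }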
	I1216 04:11:56.533907 2088124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:11:56.537774 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:11:56.578487 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:11:56.619729 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:11:56.660999 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:11:56.702232 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:11:56.744306 2088124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
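The -checkend 86400 calls above make openssl exit non-zero if a certificate expires within the next 24 hours, which is what would trigger regeneration. The equivalent check in pure Go, parsing the PEM and comparing NotAfter:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // the same question `openssl x509 -checkend 86400` answers for 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }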
	I1216 04:11:56.785680 2088124 kubeadm.go:401] StartCluster: {Name:newest-cni-450938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-450938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:56.785803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:11:56.785870 2088124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:11:56.816785 2088124 cri.go:89] found id: ""
	I1216 04:11:56.816890 2088124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:11:56.824683 2088124 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:11:56.824744 2088124 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:11:56.824813 2088124 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:11:56.832253 2088124 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:11:56.832838 2088124 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-450938" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.833086 2088124 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-1796512/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-450938" cluster setting kubeconfig missing "newest-cni-450938" context setting]
	I1216 04:11:56.833830 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.835841 2088124 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:11:56.846568 2088124 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 04:11:56.846607 2088124 kubeadm.go:602] duration metric: took 21.839206ms to restartPrimaryControlPlane
	I1216 04:11:56.846659 2088124 kubeadm.go:403] duration metric: took 60.947212ms to StartCluster
	I1216 04:11:56.846683 2088124 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.846774 2088124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:11:56.847954 2088124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:56.848288 2088124 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:11:56.848543 2088124 config.go:182] Loaded profile config "newest-cni-450938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:11:56.848590 2088124 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:11:56.848653 2088124 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-450938"
	I1216 04:11:56.848667 2088124 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-450938"
	I1216 04:11:56.848690 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.849140 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.849662 2088124 addons.go:70] Setting dashboard=true in profile "newest-cni-450938"
	I1216 04:11:56.849685 2088124 addons.go:239] Setting addon dashboard=true in "newest-cni-450938"
	W1216 04:11:56.849692 2088124 addons.go:248] addon dashboard should already be in state true
	I1216 04:11:56.849725 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.850155 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.851797 2088124 addons.go:70] Setting default-storageclass=true in profile "newest-cni-450938"
	I1216 04:11:56.851835 2088124 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450938"
	I1216 04:11:56.852230 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.854311 2088124 out.go:179] * Verifying Kubernetes components...
	I1216 04:11:56.857550 2088124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:56.877736 2088124 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 04:11:56.883198 2088124 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 04:11:56.888994 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 04:11:56.889023 2088124 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 04:11:56.889099 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.905463 2088124 addons.go:239] Setting addon default-storageclass=true in "newest-cni-450938"
	I1216 04:11:56.905510 2088124 host.go:66] Checking if "newest-cni-450938" exists ...
	I1216 04:11:56.905917 2088124 cli_runner.go:164] Run: docker container inspect newest-cni-450938 --format={{.State.Status}}
	I1216 04:11:56.906132 2088124 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:11:56.909026 2088124 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:56.909049 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:11:56.909124 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.939233 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.960779 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:56.969260 2088124 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:56.969285 2088124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:11:56.969344 2088124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450938
	I1216 04:11:56.994990 2088124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/newest-cni-450938/id_rsa Username:docker}
	I1216 04:11:57.096083 2088124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:57.153660 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:57.154691 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 04:11:57.154741 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 04:11:57.179948 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:57.181646 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 04:11:57.181698 2088124 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 04:11:57.220157 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 04:11:57.220192 2088124 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 04:11:57.270420 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 04:11:57.270450 2088124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 04:11:57.289844 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 04:11:57.289925 2088124 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 04:11:57.304564 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 04:11:57.304589 2088124 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 04:11:57.318199 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 04:11:57.318268 2088124 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 04:11:57.331721 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 04:11:57.331747 2088124 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 04:11:57.344689 2088124 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:11:57.344766 2088124 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 04:11:57.358118 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:57.937381 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937417 2088124 retry.go:31] will retry after 269.480362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:57.937480 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937486 2088124 retry.go:31] will retry after 229.28952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:57.937664 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937674 2088124 retry.go:31] will retry after 277.329171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:57.937800 2088124 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:11:57.937903 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
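
Interleaved with the addon applies, minikube polls for the apiserver process roughly every 500ms (the pgrep lines throughout this section). An illustrative wait loop using the same flags as the log, -x exact match, -n newest, -f match against the full command line; this is a sketch of the pattern, not minikube's actual implementation:

```go
// Sketch of the "waiting for apiserver process to appear" loop: poll pgrep
// until a kube-apiserver process whose command line mentions "minikube"
// exists, or give up at the deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Run returns nil only when pgrep exits 0, i.e. a match was found.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```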
	I1216 04:11:58.167607 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.207320 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:58.215928 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.286306 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.286392 2088124 retry.go:31] will retry after 251.551644ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.336689 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.336775 2088124 retry.go:31] will retry after 297.618581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:11:58.344615 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.344703 2088124 retry.go:31] will retry after 371.748045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.438848 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:58.538550 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:58.607193 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.607227 2088124 retry.go:31] will retry after 295.364456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.635597 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:58.705620 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.705655 2088124 retry.go:31] will retry after 548.313742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.716963 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:58.791977 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.792012 2088124 retry.go:31] will retry after 352.878163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.903095 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:58.938720 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:11:58.980189 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:58.980231 2088124 retry.go:31] will retry after 538.903986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.145753 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:11:59.214092 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.214141 2088124 retry.go:31] will retry after 822.609154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.254394 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:11:59.315668 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.315705 2088124 retry.go:31] will retry after 808.232785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.439021 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:11:59.520292 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:11:59.580253 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.580290 2088124 retry.go:31] will retry after 1.339162464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:11:59.938854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.037859 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:12:00.126588 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:00.271287 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271330 2088124 retry.go:31] will retry after 1.560463337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:12:00.271395 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.271405 2088124 retry.go:31] will retry after 965.630874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:00.439512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:00.919713 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:00.938198 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:01.016821 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.016853 2088124 retry.go:31] will retry after 2.723457612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.238128 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:01.299810 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.299846 2088124 retry.go:31] will retry after 1.407497229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.438022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:01.832831 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:01.895982 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.896019 2088124 retry.go:31] will retry after 1.861173275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:01.938295 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.438804 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:02.708270 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:02.778471 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:02.778510 2088124 retry.go:31] will retry after 3.48676176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:02.938901 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:03.740586 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:03.758141 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:03.823512 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.823549 2088124 retry.go:31] will retry after 3.513983603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:12:03.840241 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.840277 2088124 retry.go:31] will retry after 3.549700703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:03.938636 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.438975 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:04.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.438813 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:05.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:06.265883 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:06.326297 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.326330 2088124 retry.go:31] will retry after 5.907729831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:06.438566 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:06.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:07.337994 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:07.390520 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:07.400091 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.400119 2088124 retry.go:31] will retry after 4.07949146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.438412 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:07.458870 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.458913 2088124 retry.go:31] will retry after 5.738742007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:07.938058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.438048 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:08.938086 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.438088 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:09.938071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.438982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:10.938817 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:11.438560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
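
The `sudo pgrep -xnf kube-apiserver.*minikube.*` lines repeating every ~500ms are minikube polling the node for a running kube-apiserver process; the addon retries above keep failing because this poll keeps coming back empty. A minimal sketch of that poll loop, assuming a hypothetical waitForAPIServer run locally rather than through minikube's SSH runner (ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or the
// deadline passes. Sketch of the poll visible in the log above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -f matches against the full command line, -x requires the pattern
		// to match it entirely, -n keeps only the newest matching process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond) // the log shows a ~500ms poll interval
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
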
	I1216 04:12:11.480608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:11.544274 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.544311 2088124 retry.go:31] will retry after 7.489839912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:11.938962 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.234793 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:12.294760 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.294795 2088124 retry.go:31] will retry after 8.284230916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:12.438042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:12.938369 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.198743 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:13.273972 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.274008 2088124 retry.go:31] will retry after 8.727161897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:13.438137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:13.938122 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.438053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:14.938105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.438117 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:15.938675 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.438275 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:16.938051 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.438977 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:17.938090 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.438139 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:18.938875 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.034608 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:19.095129 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.095161 2088124 retry.go:31] will retry after 13.285449955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:19.438765 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.938027 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.438947 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:20.579839 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:20.651187 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.651287 2088124 retry.go:31] will retry after 8.595963064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:20.938552 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.438919 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:21.938886 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.001902 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:22.069854 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.069889 2088124 retry.go:31] will retry after 9.875475964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:22.438071 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:22.938057 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.438759 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:23.938093 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.438012 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:24.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.438056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:25.938060 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.438683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:26.938545 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.438839 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:27.938076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.438528 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:28.938942 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.247522 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:12:29.319498 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.319530 2088124 retry.go:31] will retry after 11.610992075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:29.438808 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:29.938634 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.438853 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:30.938004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.438055 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.939022 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:31.945765 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:32.028672 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.028710 2088124 retry.go:31] will retry after 8.660108846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.380884 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:12:32.438451 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:32.451845 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:32.451878 2088124 retry.go:31] will retry after 20.587741489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
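The retry.go:31 entries above show the addon apply being re-attempted with a growing delay (8.66s, then 20.58s, with still larger intervals further down). A minimal Go sketch of that retry-with-backoff pattern follows; applyAddon and its command line are illustrative stand-ins, not minikube's actual addons code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddon mirrors the logged command shape:
// kubectl apply --force -f <manifest>.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	backoff := 5 * time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err == nil {
			fmt.Println("applied")
			return
		} else {
			// Matches the "apply failed, will retry after ..." entries above.
			fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, backoff, err)
		}
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the interval, as the logged retries do
	}
	fmt.Println("giving up")
}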
	I1216 04:12:32.939020 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.438637 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:33.939026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.438183 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:34.938889 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.438058 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:35.938079 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.438040 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:36.938449 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.438932 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:37.938711 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.438609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:38.938102 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.438039 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:39.938131 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:40.689879 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:12:40.758598 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.758633 2088124 retry.go:31] will retry after 22.619838961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:40.931114 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:12:40.938807 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 04:12:41.022703 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:41.022737 2088124 retry.go:31] will retry after 26.329717671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
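Every apply in this run fails the same way because kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443; the error text itself points at the --validate=false escape hatch. A small probe reproducing just that dependency, under the assumption that hitting the same URL with certificate verification skipped is enough to demonstrate the refusal:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // same timeout kubectl encodes in the logged URL
		Transport: &http.Transport{
			// The apiserver cert is not trusted here; skip verification for the probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// With no apiserver listening, this is the "connection refused" in the log.
		fmt.Println("openapi probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}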
	I1216 04:12:41.438070 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:41.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.438073 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:42.938708 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.438842 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:43.938877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.438603 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:44.938026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.438387 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:45.938042 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.438913 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:46.938061 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.438724 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:47.938106 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.438105 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:48.938608 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.438052 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:49.938137 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.438126 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:50.938158 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.438047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:51.938036 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.437993 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:52.938585 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.040311 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 04:12:53.100279 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:53.100315 2088124 retry.go:31] will retry after 25.050501438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:12:53.438735 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:53.938047 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.438981 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:54.938826 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.438076 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:55.938982 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:56.438082 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
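The timestamps on the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines show a steady ~500ms liveness poll for the apiserver process. A hedged sketch of such a poller; waitForAPIServer is illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for time.Now().Before(deadline) {
		<-ticker.C
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
	}
	return false
}

func main() {
	if waitForAPIServer(10 * time.Second) {
		fmt.Println("kube-apiserver process found")
	} else {
		fmt.Println("kube-apiserver never appeared")
	}
}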
	I1216 04:12:56.938775 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:56.938878 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:56.965240 2088124 cri.go:89] found id: ""
	I1216 04:12:56.965267 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.965275 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:56.965282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:56.965342 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:56.994326 2088124 cri.go:89] found id: ""
	I1216 04:12:56.994352 2088124 logs.go:282] 0 containers: []
	W1216 04:12:56.994361 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:56.994368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:56.994428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:57.023992 2088124 cri.go:89] found id: ""
	I1216 04:12:57.024019 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.024028 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:57.024034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:57.024096 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:57.048533 2088124 cri.go:89] found id: ""
	I1216 04:12:57.048557 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.048564 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:57.048571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:57.048633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:57.073452 2088124 cri.go:89] found id: ""
	I1216 04:12:57.073477 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.073489 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:57.073495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:57.073556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:12:57.098320 2088124 cri.go:89] found id: ""
	I1216 04:12:57.098343 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.098351 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:12:57.098358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:12:57.098422 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:12:57.122156 2088124 cri.go:89] found id: ""
	I1216 04:12:57.122178 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.122186 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:12:57.122192 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:12:57.122253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:12:57.146348 2088124 cri.go:89] found id: ""
	I1216 04:12:57.146371 2088124 logs.go:282] 0 containers: []
	W1216 04:12:57.146379 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:12:57.146389 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:12:57.146400 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:12:57.204504 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:12:57.204554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:12:57.222444 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:12:57.222477 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:12:57.295723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:12:57.287802    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.288197    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.289767    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.290151    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:12:57.291759    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:12:57.295745 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:12:57.295758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:12:57.320926 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:12:57.320959 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
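The cri.go/logs.go cycle that just completed sweeps each control-plane component with crictl; an empty ID list produces the `No container was found matching "<name>"` warnings. A minimal sketch of that sweep, built from the exact crictl invocation logged above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}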
	I1216 04:12:59.851668 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:59.862238 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:12:59.862307 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:12:59.886311 2088124 cri.go:89] found id: ""
	I1216 04:12:59.886338 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.886346 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:12:59.886353 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:12:59.886412 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:12:59.910403 2088124 cri.go:89] found id: ""
	I1216 04:12:59.910426 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.910434 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:12:59.910440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:12:59.910498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:12:59.935230 2088124 cri.go:89] found id: ""
	I1216 04:12:59.935253 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.935262 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:12:59.935268 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:12:59.935329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:12:59.958999 2088124 cri.go:89] found id: ""
	I1216 04:12:59.959022 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.959030 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:12:59.959037 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:12:59.959113 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:12:59.984633 2088124 cri.go:89] found id: ""
	I1216 04:12:59.984655 2088124 logs.go:282] 0 containers: []
	W1216 04:12:59.984663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:12:59.984670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:12:59.984729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:00.052821 2088124 cri.go:89] found id: ""
	I1216 04:13:00.052848 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.052857 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:00.052865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:00.052942 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:00.179259 2088124 cri.go:89] found id: ""
	I1216 04:13:00.179286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.179295 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:00.179301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:00.179374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:00.301818 2088124 cri.go:89] found id: ""
	I1216 04:13:00.301845 2088124 logs.go:282] 0 containers: []
	W1216 04:13:00.301854 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:00.301865 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:00.301877 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:00.370430 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:00.370474 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:00.387961 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:00.387994 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:00.469934 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:00.460570    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.462248    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.463914    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.464266    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:00.465844    1961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:00.470008 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:00.470035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:00.497033 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:00.497108 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.031116 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:03.042155 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:03.042231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:03.067263 2088124 cri.go:89] found id: ""
	I1216 04:13:03.067286 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.067294 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:03.067300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:03.067359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:03.092385 2088124 cri.go:89] found id: ""
	I1216 04:13:03.092411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.092421 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:03.092434 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:03.092500 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:03.121839 2088124 cri.go:89] found id: ""
	I1216 04:13:03.121866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.121874 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:03.121881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:03.121939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:03.145563 2088124 cri.go:89] found id: ""
	I1216 04:13:03.145591 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.145600 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:03.145606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:03.145674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:03.173280 2088124 cri.go:89] found id: ""
	I1216 04:13:03.173308 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.173317 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:03.173324 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:03.173387 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:03.198437 2088124 cri.go:89] found id: ""
	I1216 04:13:03.198464 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.198472 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:03.198479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:03.198539 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:03.223390 2088124 cri.go:89] found id: ""
	I1216 04:13:03.223417 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.223426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:03.223433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:03.223492 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:03.247999 2088124 cri.go:89] found id: ""
	I1216 04:13:03.248027 2088124 logs.go:282] 0 containers: []
	W1216 04:13:03.248037 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:03.248046 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:03.248058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:03.273012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:03.273045 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:03.309023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:03.309054 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:03.365917 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:03.365958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:03.379538 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 04:13:03.383127 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:03.383196 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:03.513399 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:03.513433 2088124 retry.go:31] will retry after 36.39416212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:03.513601 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:03.473858    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.483329    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.484155    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.485970    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:03.486663    2088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.013933 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:06.025509 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:06.025592 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:06.052215 2088124 cri.go:89] found id: ""
	I1216 04:13:06.052240 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.052251 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:06.052258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:06.052322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:06.079261 2088124 cri.go:89] found id: ""
	I1216 04:13:06.079294 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.079303 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:06.079309 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:06.079373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:06.105297 2088124 cri.go:89] found id: ""
	I1216 04:13:06.105320 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.105329 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:06.105335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:06.105394 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:06.134648 2088124 cri.go:89] found id: ""
	I1216 04:13:06.134671 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.134679 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:06.134685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:06.134753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:06.159604 2088124 cri.go:89] found id: ""
	I1216 04:13:06.159627 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.159635 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:06.159641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:06.159705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:06.189283 2088124 cri.go:89] found id: ""
	I1216 04:13:06.189307 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.189315 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:06.189322 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:06.189431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:06.214435 2088124 cri.go:89] found id: ""
	I1216 04:13:06.214469 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.214479 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:06.214486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:06.214553 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:06.240374 2088124 cri.go:89] found id: ""
	I1216 04:13:06.240399 2088124 logs.go:282] 0 containers: []
	W1216 04:13:06.240407 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:06.240417 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:06.240465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:06.297779 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:06.297828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:06.314788 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:06.314817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:06.383844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:06.374547    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.375415    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377219    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.377642    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:06.379268    2196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:06.383863 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:06.383876 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:06.409175 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:06.409211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
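With no CRI containers to inspect, each gathering pass falls back to host-level logs: the kubelet and containerd journald units, filtered dmesg, and a container-status listing. A hedged sketch wiring together the exact shell commands from the Run: lines above; the gather helper is illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the logged shell pipelines and prints whatever it returns.
func gather(label, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}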
	I1216 04:13:07.353255 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:07.417109 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:07.417143 2088124 retry.go:31] will retry after 43.71748827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:13:08.979175 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:08.990018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:08.990104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:09.017028 2088124 cri.go:89] found id: ""
	I1216 04:13:09.017051 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.017060 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:09.017066 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:09.017126 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:09.042381 2088124 cri.go:89] found id: ""
	I1216 04:13:09.042404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.042413 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:09.042419 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:09.042477 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:09.071646 2088124 cri.go:89] found id: ""
	I1216 04:13:09.071670 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.071679 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:09.071685 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:09.071744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:09.100697 2088124 cri.go:89] found id: ""
	I1216 04:13:09.100722 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.100730 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:09.100737 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:09.100797 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:09.129662 2088124 cri.go:89] found id: ""
	I1216 04:13:09.129695 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.129704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:09.129710 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:09.129780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:09.156770 2088124 cri.go:89] found id: ""
	I1216 04:13:09.156794 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.156802 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:09.156809 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:09.156869 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:09.182436 2088124 cri.go:89] found id: ""
	I1216 04:13:09.182458 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.182466 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:09.182472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:09.182531 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:09.206146 2088124 cri.go:89] found id: ""
	I1216 04:13:09.206170 2088124 logs.go:282] 0 containers: []
	W1216 04:13:09.206177 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:09.206186 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:09.206198 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:09.231510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:09.231544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:09.260226 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:09.260256 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:09.316036 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:09.316074 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:09.332123 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:09.332153 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:09.399253 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:09.390164    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.390761    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392299    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.392750    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:09.394603    2331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:11.899540 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:11.910018 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:11.910090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:11.938505 2088124 cri.go:89] found id: ""
	I1216 04:13:11.938532 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.938541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:11.938549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:11.938611 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:11.962625 2088124 cri.go:89] found id: ""
	I1216 04:13:11.962654 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.962663 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:11.962681 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:11.962753 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:11.987471 2088124 cri.go:89] found id: ""
	I1216 04:13:11.987497 2088124 logs.go:282] 0 containers: []
	W1216 04:13:11.987506 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:11.987512 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:11.987578 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:12.016864 2088124 cri.go:89] found id: ""
	I1216 04:13:12.016892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.016900 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:12.016907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:12.016971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:12.042061 2088124 cri.go:89] found id: ""
	I1216 04:13:12.042088 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.042096 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:12.042102 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:12.042163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:12.071427 2088124 cri.go:89] found id: ""
	I1216 04:13:12.071455 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.071464 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:12.071471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:12.071533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:12.096407 2088124 cri.go:89] found id: ""
	I1216 04:13:12.096454 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.096463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:12.096470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:12.096529 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:12.120925 2088124 cri.go:89] found id: ""
	I1216 04:13:12.120952 2088124 logs.go:282] 0 containers: []
	W1216 04:13:12.120961 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:12.120970 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:12.120981 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:12.187317 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:12.178799    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.179379    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181098    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.181645    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:12.183376    2425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:12.187390 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:12.187411 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:12.212126 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:12.212162 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:12.243105 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:12.243134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:12.300571 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:12.300619 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:14.817445 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:14.827746 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:14.827821 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:14.857336 2088124 cri.go:89] found id: ""
	I1216 04:13:14.857363 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.857372 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:14.857379 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:14.857446 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:14.882109 2088124 cri.go:89] found id: ""
	I1216 04:13:14.882137 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.882146 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:14.882152 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:14.882211 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:14.914132 2088124 cri.go:89] found id: ""
	I1216 04:13:14.914161 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.914171 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:14.914178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:14.914239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:14.939185 2088124 cri.go:89] found id: ""
	I1216 04:13:14.939214 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.939223 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:14.939230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:14.939297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:14.963568 2088124 cri.go:89] found id: ""
	I1216 04:13:14.963595 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.963604 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:14.963630 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:14.963702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:14.988853 2088124 cri.go:89] found id: ""
	I1216 04:13:14.988880 2088124 logs.go:282] 0 containers: []
	W1216 04:13:14.988889 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:14.988895 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:14.988957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:15.018658 2088124 cri.go:89] found id: ""
	I1216 04:13:15.018685 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.018694 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:15.018701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:15.018780 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:15.052902 2088124 cri.go:89] found id: ""
	I1216 04:13:15.052926 2088124 logs.go:282] 0 containers: []
	W1216 04:13:15.052935 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:15.052945 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:15.052956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:15.110239 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:15.110275 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:15.126429 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:15.126498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:15.193844 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:15.184934    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.185663    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.187427    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.188101    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:15.189782    2542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:15.193874 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:15.193889 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:15.219891 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:15.219925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:17.752258 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:17.763106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:17.763180 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:17.789059 2088124 cri.go:89] found id: ""
	I1216 04:13:17.789084 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.789093 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:17.789099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:17.789158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:17.817534 2088124 cri.go:89] found id: ""
	I1216 04:13:17.817560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.817569 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:17.817576 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:17.817637 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:17.843134 2088124 cri.go:89] found id: ""
	I1216 04:13:17.843160 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.843169 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:17.843175 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:17.843240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:17.868379 2088124 cri.go:89] found id: ""
	I1216 04:13:17.868404 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.868414 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:17.868421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:17.868490 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:17.893356 2088124 cri.go:89] found id: ""
	I1216 04:13:17.893384 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.893393 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:17.893400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:17.893463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:17.921808 2088124 cri.go:89] found id: ""
	I1216 04:13:17.921851 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.921860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:17.921867 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:17.921928 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:17.947257 2088124 cri.go:89] found id: ""
	I1216 04:13:17.947284 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.947293 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:17.947300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:17.947367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:17.975318 2088124 cri.go:89] found id: ""
	I1216 04:13:17.975345 2088124 logs.go:282] 0 containers: []
	W1216 04:13:17.975354 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:17.975364 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:17.975375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:18.051655 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:18.042648    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.043445    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045243    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.045767    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:18.047547    2653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:18.051680 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:18.051693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:18.078685 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:18.078723 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:18.107761 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:18.107792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:18.151402 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:13:18.166502 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:18.166585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 04:13:18.219917 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:18.220071 2088124 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:13:20.720560 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:20.734518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:20.734605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:20.771344 2088124 cri.go:89] found id: ""
	I1216 04:13:20.771418 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.771435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:20.771442 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:20.771517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:20.801470 2088124 cri.go:89] found id: ""
	I1216 04:13:20.801496 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.801505 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:20.801511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:20.801591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:20.826547 2088124 cri.go:89] found id: ""
	I1216 04:13:20.826620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.826644 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:20.826663 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:20.826747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:20.852855 2088124 cri.go:89] found id: ""
	I1216 04:13:20.852881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.852891 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:20.852898 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:20.852986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:20.878623 2088124 cri.go:89] found id: ""
	I1216 04:13:20.878659 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.878668 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:20.878692 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:20.878808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:20.902864 2088124 cri.go:89] found id: ""
	I1216 04:13:20.902938 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.902964 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:20.902984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:20.903181 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:20.932453 2088124 cri.go:89] found id: ""
	I1216 04:13:20.932480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.932488 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:20.932495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:20.932552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:20.961972 2088124 cri.go:89] found id: ""
	I1216 04:13:20.962003 2088124 logs.go:282] 0 containers: []
	W1216 04:13:20.962012 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:20.962021 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:20.962046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:21.031620 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:21.021920    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.023404    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.024377    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.025233    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:21.026911    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:21.031656 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:21.031669 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:21.057107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:21.057141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:21.084165 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:21.084195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:21.144652 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:21.144688 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:23.662474 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:23.672891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:23.672972 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:23.728294 2088124 cri.go:89] found id: ""
	I1216 04:13:23.728317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.728325 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:23.728332 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:23.728390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:23.774385 2088124 cri.go:89] found id: ""
	I1216 04:13:23.774414 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.774423 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:23.774429 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:23.774496 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:23.804506 2088124 cri.go:89] found id: ""
	I1216 04:13:23.804531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.804553 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:23.804560 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:23.804618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:23.831638 2088124 cri.go:89] found id: ""
	I1216 04:13:23.831674 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.831683 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:23.831689 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:23.831766 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:23.856129 2088124 cri.go:89] found id: ""
	I1216 04:13:23.856155 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.856164 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:23.856172 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:23.856251 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:23.884761 2088124 cri.go:89] found id: ""
	I1216 04:13:23.884787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.884796 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:23.884803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:23.884905 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:23.913711 2088124 cri.go:89] found id: ""
	I1216 04:13:23.913736 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.913745 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:23.913752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:23.913810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:23.938590 2088124 cri.go:89] found id: ""
	I1216 04:13:23.938616 2088124 logs.go:282] 0 containers: []
	W1216 04:13:23.938625 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:23.938635 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:23.938646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:23.993972 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:23.994007 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:24.012474 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:24.012506 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:24.080748 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:24.071242    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.071643    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.073210    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.074640    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:24.075561    2888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:24.080778 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:24.080791 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:24.110317 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:24.110357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:26.644643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:26.655360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:26.655430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:26.679082 2088124 cri.go:89] found id: ""
	I1216 04:13:26.679108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.679117 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:26.679124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:26.679184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:26.727361 2088124 cri.go:89] found id: ""
	I1216 04:13:26.727389 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.727399 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:26.727405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:26.727466 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:26.784659 2088124 cri.go:89] found id: ""
	I1216 04:13:26.784688 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.784697 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:26.784703 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:26.784765 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:26.813210 2088124 cri.go:89] found id: ""
	I1216 04:13:26.813237 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.813246 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:26.813253 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:26.813336 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:26.837930 2088124 cri.go:89] found id: ""
	I1216 04:13:26.837955 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.837963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:26.837970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:26.838031 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:26.864344 2088124 cri.go:89] found id: ""
	I1216 04:13:26.864369 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.864378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:26.864385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:26.864461 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:26.889169 2088124 cri.go:89] found id: ""
	I1216 04:13:26.889195 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.889207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:26.889214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:26.889298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:26.913569 2088124 cri.go:89] found id: ""
	I1216 04:13:26.913596 2088124 logs.go:282] 0 containers: []
	W1216 04:13:26.913604 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:26.913614 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:26.913644 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:26.929642 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:26.929671 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:26.992130 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:26.983971    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.984717    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986365    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.986828    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:26.988289    2998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:13:26.992154 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:26.992166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:27.018253 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:27.018291 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:27.047464 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:27.047492 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.603162 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:29.613926 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:29.614005 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:29.639664 2088124 cri.go:89] found id: ""
	I1216 04:13:29.639690 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.639700 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:29.639706 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:29.639773 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:29.664287 2088124 cri.go:89] found id: ""
	I1216 04:13:29.664313 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.664322 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:29.664328 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:29.664391 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:29.715854 2088124 cri.go:89] found id: ""
	I1216 04:13:29.715881 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.715890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:29.715896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:29.715957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:29.775256 2088124 cri.go:89] found id: ""
	I1216 04:13:29.775283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.775291 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:29.775298 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:29.775359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:29.800860 2088124 cri.go:89] found id: ""
	I1216 04:13:29.800884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.800893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:29.800899 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:29.800966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:29.826179 2088124 cri.go:89] found id: ""
	I1216 04:13:29.826201 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.826209 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:29.826216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:29.826287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:29.851587 2088124 cri.go:89] found id: ""
	I1216 04:13:29.851657 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.851668 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:29.851675 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:29.851771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:29.876290 2088124 cri.go:89] found id: ""
	I1216 04:13:29.876317 2088124 logs.go:282] 0 containers: []
	W1216 04:13:29.876327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:29.876336 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:29.876351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:29.934758 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:29.934795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:29.950904 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:29.950934 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:30.063379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:30.052801    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.053836    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.055700    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.056432    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.058410    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:30.052801    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.053836    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.055700    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.056432    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:30.058410    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:30.063402 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:30.063416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:30.093513 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:30.093550 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
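The "container status" command above uses a shell fallback so the same line works on any runtime: the backticks resolve a full crictl path when `which` finds one (otherwise the bare name is tried via $PATH), and if that invocation fails entirely, the `|| sudo docker ps -a` branch queries Docker instead. Written long-hand as a sketch:

    # Long-hand equivalent of the fallback one-liner (sketch, not minikube source).
    CRICTL="$(which crictl || echo crictl)"    # full path if installed, else bare name
    sudo "$CRICTL" ps -a || sudo docker ps -a  # query Docker only if crictl fails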
	I1216 04:13:32.623683 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:32.634450 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:32.634522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:32.659386 2088124 cri.go:89] found id: ""
	I1216 04:13:32.659411 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.659419 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:32.659426 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:32.659488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:32.700370 2088124 cri.go:89] found id: ""
	I1216 04:13:32.700397 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.700406 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:32.700413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:32.700483 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:32.757584 2088124 cri.go:89] found id: ""
	I1216 04:13:32.757606 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.757615 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:32.757621 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:32.757683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:32.803420 2088124 cri.go:89] found id: ""
	I1216 04:13:32.803445 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.803454 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:32.803460 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:32.803523 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:32.828842 2088124 cri.go:89] found id: ""
	I1216 04:13:32.828866 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.828875 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:32.828881 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:32.828949 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:32.853353 2088124 cri.go:89] found id: ""
	I1216 04:13:32.853380 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.853389 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:32.853398 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:32.853501 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:32.877408 2088124 cri.go:89] found id: ""
	I1216 04:13:32.877435 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.877444 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:32.877451 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:32.877510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:32.901743 2088124 cri.go:89] found id: ""
	I1216 04:13:32.901770 2088124 logs.go:282] 0 containers: []
	W1216 04:13:32.901780 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:32.901790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:32.901804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:32.967369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:32.958798    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.959484    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.960984    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.961467    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.962939    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:32.958798    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.959484    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.960984    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.961467    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:32.962939    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:32.967394 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:32.967408 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:32.992952 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:32.992987 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:33.022501 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:33.022532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:33.078417 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:33.078454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
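The dmesg invocation used for "Gathering logs for dmesg" keeps only what matters for triage: `-H` for human-readable timestamps, `-P` to disable the pager, `-L=never` to strip color codes from the capture, `--level warn,err,crit,alert,emerg` to keep warnings and worse, with `tail -n 400` capping the volume.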
	I1216 04:13:35.594569 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:35.607352 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:35.607423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:35.637370 2088124 cri.go:89] found id: ""
	I1216 04:13:35.637394 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.637403 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:35.637409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:35.637468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:35.661404 2088124 cri.go:89] found id: ""
	I1216 04:13:35.661428 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.661437 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:35.661443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:35.661499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:35.700087 2088124 cri.go:89] found id: ""
	I1216 04:13:35.700110 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.700118 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:35.700124 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:35.700185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:35.753090 2088124 cri.go:89] found id: ""
	I1216 04:13:35.753163 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.753187 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:35.753207 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:35.753322 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:35.783667 2088124 cri.go:89] found id: ""
	I1216 04:13:35.783693 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.783701 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:35.783707 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:35.783783 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:35.808401 2088124 cri.go:89] found id: ""
	I1216 04:13:35.808426 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.808434 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:35.808457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:35.808518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:35.832934 2088124 cri.go:89] found id: ""
	I1216 04:13:35.833001 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.833014 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:35.833022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:35.833080 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:35.857857 2088124 cri.go:89] found id: ""
	I1216 04:13:35.857892 2088124 logs.go:282] 0 containers: []
	W1216 04:13:35.857902 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:35.857911 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:35.857928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:35.888212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:35.888240 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:35.944155 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:35.944191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:35.960968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:35.960997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:36.037726 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:36.028639    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.029470    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.030504    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.031100    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:36.033315    3350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:36.037753 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:36.037768 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
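Each roughly three-second cycle in this log is one iteration of minikube's apiserver wait: a pgrep for a running apiserver process, then a crictl query per expected control-plane container, all returning empty, then a fresh round of log gathering for kubelet, dmesg, describe nodes, containerd, and container status (the order varies per cycle). The per-component probe each cycle performs is equivalent to this sketch, with the component list taken from the log lines above:

    # Per-component probe, reconstructed from the log above (sketch, not minikube source).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        ids="$(sudo crictl ps -a --quiet --name "$c")"
        [ -n "$ids" ] || echo "No container was found matching \"$c\""
    done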
	I1216 04:13:38.565516 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:38.576078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:38.576153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:38.603519 2088124 cri.go:89] found id: ""
	I1216 04:13:38.603550 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.603564 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:38.603571 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:38.603642 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:38.630185 2088124 cri.go:89] found id: ""
	I1216 04:13:38.630212 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.630222 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:38.630228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:38.630295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:38.656496 2088124 cri.go:89] found id: ""
	I1216 04:13:38.656518 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.656527 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:38.656532 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:38.656597 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:38.691354 2088124 cri.go:89] found id: ""
	I1216 04:13:38.691375 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.691384 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:38.691390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:38.691448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:38.727377 2088124 cri.go:89] found id: ""
	I1216 04:13:38.727451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.727476 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:38.727495 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:38.727607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:38.792847 2088124 cri.go:89] found id: ""
	I1216 04:13:38.792924 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.792949 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:38.792969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:38.793082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:38.819253 2088124 cri.go:89] found id: ""
	I1216 04:13:38.819326 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.819351 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:38.819369 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:38.819479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:38.844536 2088124 cri.go:89] found id: ""
	I1216 04:13:38.844560 2088124 logs.go:282] 0 containers: []
	W1216 04:13:38.844569 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:38.844578 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:38.844590 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:38.903226 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:38.903264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:38.919524 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:38.919556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:38.983586 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:38.974559    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.975311    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.976854    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.977457    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:38.979078    3450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:38.983611 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:38.983625 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:39.009510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:39.009548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:39.908601 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 04:13:39.971867 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:39.972017 2088124 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
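The dashboard addon failure above is a downstream symptom of the same dead apiserver, not a problem with the manifests: client-side validation must download the OpenAPI schema from https://localhost:8443, so every file is rejected before anything is submitted. The `--validate=false` hint in the stderr would only skip that schema fetch; the apply would still fail at submission while port 8443 refuses connections. Once the apiserver is healthy, the retry is the exact command minikube already ran, e.g. for the first manifest:

    # Retry sketch using the paths from the log above; run inside the node.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/dashboard-ns.yaml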
	I1216 04:13:41.538728 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:41.550610 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:41.550686 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:41.580358 2088124 cri.go:89] found id: ""
	I1216 04:13:41.580388 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.580398 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:41.580405 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:41.580476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:41.609251 2088124 cri.go:89] found id: ""
	I1216 04:13:41.609323 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.609346 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:41.609360 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:41.609437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:41.634677 2088124 cri.go:89] found id: ""
	I1216 04:13:41.634714 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.634724 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:41.634731 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:41.634811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:41.660492 2088124 cri.go:89] found id: ""
	I1216 04:13:41.660531 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.660541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:41.660555 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:41.660624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:41.706922 2088124 cri.go:89] found id: ""
	I1216 04:13:41.706958 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.706967 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:41.706974 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:41.707062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:41.771121 2088124 cri.go:89] found id: ""
	I1216 04:13:41.771150 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.771160 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:41.771167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:41.771228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:41.798371 2088124 cri.go:89] found id: ""
	I1216 04:13:41.798409 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.798418 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:41.798424 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:41.798505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:41.825080 2088124 cri.go:89] found id: ""
	I1216 04:13:41.825108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:41.825118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:41.825128 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:41.825142 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:41.881228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:41.881264 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:41.897224 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:41.897252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:41.962985 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:41.954556    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.955066    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.956801    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.957150    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:41.958760    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:41.963011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:41.963024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:41.988969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:41.989006 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:44.532418 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:44.542803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:44.542915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:44.568416 2088124 cri.go:89] found id: ""
	I1216 04:13:44.568439 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.568457 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:44.568463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:44.568522 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:44.594143 2088124 cri.go:89] found id: ""
	I1216 04:13:44.594169 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.594179 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:44.594186 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:44.594247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:44.618788 2088124 cri.go:89] found id: ""
	I1216 04:13:44.618819 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.618828 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:44.618835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:44.618895 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:44.644302 2088124 cri.go:89] found id: ""
	I1216 04:13:44.644325 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.644333 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:44.644340 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:44.644398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:44.669819 2088124 cri.go:89] found id: ""
	I1216 04:13:44.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.669849 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:44.669855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:44.669924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:44.725552 2088124 cri.go:89] found id: ""
	I1216 04:13:44.725575 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.725583 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:44.725589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:44.725650 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:44.765386 2088124 cri.go:89] found id: ""
	I1216 04:13:44.765408 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.765426 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:44.765432 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:44.765491 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:44.793682 2088124 cri.go:89] found id: ""
	I1216 04:13:44.793763 2088124 logs.go:282] 0 containers: []
	W1216 04:13:44.793788 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:44.793827 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:44.793857 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:44.852432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:44.852473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:44.868492 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:44.868520 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:44.931865 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:44.923147    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.923927    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.925715    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.926197    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:44.927722    3678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:44.931889 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:44.931903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:44.957522 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:44.957557 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.485499 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:47.496279 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:47.496356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:47.520654 2088124 cri.go:89] found id: ""
	I1216 04:13:47.520681 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.520690 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:47.520696 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:47.520761 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:47.551944 2088124 cri.go:89] found id: ""
	I1216 04:13:47.551978 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.551987 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:47.552001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:47.552065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:47.578411 2088124 cri.go:89] found id: ""
	I1216 04:13:47.578438 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.578450 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:47.578457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:47.578519 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:47.604018 2088124 cri.go:89] found id: ""
	I1216 04:13:47.604041 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.604049 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:47.604055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:47.604112 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:47.629467 2088124 cri.go:89] found id: ""
	I1216 04:13:47.629491 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.629499 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:47.629506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:47.629567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:47.658252 2088124 cri.go:89] found id: ""
	I1216 04:13:47.658280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.658289 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:47.658295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:47.658362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:47.683444 2088124 cri.go:89] found id: ""
	I1216 04:13:47.683472 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.683481 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:47.683487 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:47.683548 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:47.745597 2088124 cri.go:89] found id: ""
	I1216 04:13:47.745620 2088124 logs.go:282] 0 containers: []
	W1216 04:13:47.745629 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:47.745638 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:47.745650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:47.788108 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:47.788134 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:47.844259 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:47.844292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:47.860046 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:47.860078 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:47.931100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:47.922584    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.923485    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.924699    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.925424    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:47.927031    3801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:47.931125 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:47.931139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.458157 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:50.468844 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:50.468915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:50.493698 2088124 cri.go:89] found id: ""
	I1216 04:13:50.493725 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.493735 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:50.493741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:50.493799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:50.518623 2088124 cri.go:89] found id: ""
	I1216 04:13:50.518652 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.518664 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:50.518671 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:50.518737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:50.543940 2088124 cri.go:89] found id: ""
	I1216 04:13:50.543969 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.543978 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:50.543984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:50.544043 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:50.570246 2088124 cri.go:89] found id: ""
	I1216 04:13:50.570283 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.570292 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:50.570299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:50.570374 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:50.596855 2088124 cri.go:89] found id: ""
	I1216 04:13:50.596884 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.596893 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:50.596900 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:50.596965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:50.622325 2088124 cri.go:89] found id: ""
	I1216 04:13:50.622352 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.622361 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:50.622368 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:50.622428 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:50.647658 2088124 cri.go:89] found id: ""
	I1216 04:13:50.647683 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.647691 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:50.647698 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:50.647760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:50.672119 2088124 cri.go:89] found id: ""
	I1216 04:13:50.672156 2088124 logs.go:282] 0 containers: []
	W1216 04:13:50.672166 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:50.672176 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:50.672187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:50.741830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:50.741871 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:50.758886 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:50.758917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:50.843759 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:50.834975    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.835406    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837202    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837673    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.839159    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:50.834975    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.835406    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837202    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.837673    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:50.839159    3905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:50.843782 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:50.843795 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:50.870242 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:50.870278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
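The cycle above repeats roughly every three seconds: one pgrep for a live kube-apiserver process, one crictl probe per control-plane component, then a fresh round of log gathering. A minimal Go sketch of that wait loop, assuming only that pgrep exits non-zero when nothing matches; this is illustrative, not minikube's actual implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls with the same pgrep pattern seen in the log,
    // once every three seconds, until the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second) // matches the cadence of the log above
    	}
    	return errors.New("kube-apiserver never appeared before the deadline")
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

In pgrep, -f matches against the full command line, -x requires an exact match of that line, and -n selects only the newest matching process.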
	I1216 04:13:51.134849 2088124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 04:13:51.199925 2088124 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:13:51.200071 2088124 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:13:51.205122 2088124 out.go:179] * Enabled addons: 
	I1216 04:13:51.208001 2088124 addons.go:530] duration metric: took 1m54.35940748s for enable addons: enabled=[]
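Note the failure mode here: the storageclass manifest is not malformed; kubectl's schema validation has to download the OpenAPI document from the apiserver, and localhost:8443 is refusing connections, so the addon callback fails and the run ends with an empty enabled=[] list. A hedged sketch of the retry-on-apply pattern that the "apply failed, will retry" line implies, where the attempt count, delay, and helper name are assumptions rather than minikube's real values:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs kubectl apply after a short pause, since the
    // failure above is an unreachable apiserver rather than a bad manifest.
    func applyWithRetry(kubeconfig, manifest string, attempts int) error {
    	var lastErr error
    	for i := 1; i <= attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    		out, err := cmd.CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
    		time.Sleep(2 * time.Second)
    	}
    	return lastErr
    }

    func main() {
    	err := applyWithRetry("/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storageclass.yaml", 3)
    	if err != nil {
    		fmt.Println("giving up:", err)
    	}
    }

Passing --validate=false would silence the validation error, as the stderr suggests, but it would not help here: the subsequent request would hit the same refused connection.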
	I1216 04:13:53.399835 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:53.410221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:53.410292 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:53.442995 2088124 cri.go:89] found id: ""
	I1216 04:13:53.443019 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.443028 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:53.443034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:53.443119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:53.469085 2088124 cri.go:89] found id: ""
	I1216 04:13:53.469108 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.469116 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:53.469122 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:53.469185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:53.492673 2088124 cri.go:89] found id: ""
	I1216 04:13:53.492741 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.492764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:53.492778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:53.492851 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:53.519461 2088124 cri.go:89] found id: ""
	I1216 04:13:53.519484 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.519493 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:53.519499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:53.519559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:53.544555 2088124 cri.go:89] found id: ""
	I1216 04:13:53.544578 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.544587 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:53.544593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:53.544655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:53.570476 2088124 cri.go:89] found id: ""
	I1216 04:13:53.570499 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.570508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:53.570514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:53.570576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:53.598792 2088124 cri.go:89] found id: ""
	I1216 04:13:53.598814 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.598822 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:53.598828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:53.598894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:53.627454 2088124 cri.go:89] found id: ""
	I1216 04:13:53.627477 2088124 logs.go:282] 0 containers: []
	W1216 04:13:53.627485 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:53.627494 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:53.627505 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:53.684461 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:53.684541 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:53.709962 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:53.710041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:53.803419 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:53.793865    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.794942    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796672    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796978    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.798420    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:53.793865    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.794942    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796672    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.796978    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:53.798420    4026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:53.803444 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:53.803462 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:53.829615 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:53.829652 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
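Each gathering pass runs the same fixed set of commands over SSH: the last 400 journal lines for kubelet and containerd, dmesg filtered to warning level and above, kubectl describe nodes, and a crictl (or docker) container listing. A self-contained sketch that replays the journal and dmesg commands exactly as they appear in the log, assuming a host where they are runnable:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather shells out through bash the way the "Gathering logs for ..."
    // lines do, and prints whatever comes back.
    func gather(name, command string) {
    	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s: %v\n", name, err)
    		return
    	}
    	fmt.Printf("=== %s ===\n%s", name, out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }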
	I1216 04:13:56.358195 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:56.368722 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:56.368794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:56.393334 2088124 cri.go:89] found id: ""
	I1216 04:13:56.393358 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.393367 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:56.393373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:56.393440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:56.417912 2088124 cri.go:89] found id: ""
	I1216 04:13:56.417935 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.417944 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:56.417983 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:56.418062 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:56.445420 2088124 cri.go:89] found id: ""
	I1216 04:13:56.445451 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.445461 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:56.445467 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:56.445526 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:56.469454 2088124 cri.go:89] found id: ""
	I1216 04:13:56.469478 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.469487 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:56.469493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:56.469552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:56.494121 2088124 cri.go:89] found id: ""
	I1216 04:13:56.494145 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.494153 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:56.494165 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:56.494225 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:56.517578 2088124 cri.go:89] found id: ""
	I1216 04:13:56.517602 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.517611 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:56.517637 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:56.517700 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:56.544866 2088124 cri.go:89] found id: ""
	I1216 04:13:56.544891 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.544899 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:56.544941 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:56.545022 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:56.573759 2088124 cri.go:89] found id: ""
	I1216 04:13:56.573787 2088124 logs.go:282] 0 containers: []
	W1216 04:13:56.573796 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:56.573805 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:56.573817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:13:56.599163 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:56.599202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:56.630921 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:56.630948 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:56.688477 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:56.688553 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:56.720603 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:56.720634 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:56.828200 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:56.818507    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.819366    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821131    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821683    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.823608    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:56.818507    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.819366    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821131    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.821683    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:56.823608    4154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
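Every describe-nodes attempt fails identically: client-go's discovery layer (memcache.go) retries the /api endpoint five times, and each dial to [::1]:8443 is refused, which means nothing is listening on the port at all rather than the server rejecting the request. A one-file probe that checks the same condition, with the address copied from the log and intended only as an illustration:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err) // expected while it is down
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }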
	I1216 04:13:59.328466 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:13:59.339589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:13:59.339664 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:13:59.364346 2088124 cri.go:89] found id: ""
	I1216 04:13:59.364373 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.364382 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:13:59.364389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:13:59.364494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:13:59.393412 2088124 cri.go:89] found id: ""
	I1216 04:13:59.393480 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.393503 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:13:59.393516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:13:59.393590 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:13:59.422012 2088124 cri.go:89] found id: ""
	I1216 04:13:59.422039 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.422048 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:13:59.422055 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:13:59.422111 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:13:59.447252 2088124 cri.go:89] found id: ""
	I1216 04:13:59.447280 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.447289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:13:59.447301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:13:59.447362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:13:59.473224 2088124 cri.go:89] found id: ""
	I1216 04:13:59.473253 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.473262 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:13:59.473269 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:13:59.473333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:13:59.498117 2088124 cri.go:89] found id: ""
	I1216 04:13:59.498142 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.498151 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:13:59.498157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:13:59.498218 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:13:59.531960 2088124 cri.go:89] found id: ""
	I1216 04:13:59.531983 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.531992 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:13:59.531998 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:13:59.532064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:13:59.555530 2088124 cri.go:89] found id: ""
	I1216 04:13:59.555557 2088124 logs.go:282] 0 containers: []
	W1216 04:13:59.555567 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:13:59.555586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:13:59.555597 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:13:59.587567 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:13:59.587594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:13:59.642770 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:13:59.642808 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:13:59.658670 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:13:59.658698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:13:59.758071 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:13:59.747797    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.748997    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.749964    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.751637    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.752201    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:13:59.747797    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.748997    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.749964    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.751637    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:13:59.752201    4259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:13:59.758096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:13:59.758109 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:02.297267 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:02.308025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:02.308094 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:02.332912 2088124 cri.go:89] found id: ""
	I1216 04:14:02.332938 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.332947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:02.332953 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:02.333015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:02.358723 2088124 cri.go:89] found id: ""
	I1216 04:14:02.358746 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.358754 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:02.358760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:02.358820 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:02.384845 2088124 cri.go:89] found id: ""
	I1216 04:14:02.384869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.384878 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:02.384884 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:02.384947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:02.411300 2088124 cri.go:89] found id: ""
	I1216 04:14:02.411327 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.411337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:02.411343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:02.411401 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:02.436448 2088124 cri.go:89] found id: ""
	I1216 04:14:02.436490 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.436500 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:02.436506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:02.436568 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:02.462003 2088124 cri.go:89] found id: ""
	I1216 04:14:02.462030 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.462039 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:02.462045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:02.462115 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:02.487374 2088124 cri.go:89] found id: ""
	I1216 04:14:02.487398 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.487407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:02.487414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:02.487473 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:02.513515 2088124 cri.go:89] found id: ""
	I1216 04:14:02.513541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:02.513549 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:02.513559 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:02.513574 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:02.569398 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:02.569439 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:02.585943 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:02.585986 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:02.652956 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:02.644316    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.645186    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.646890    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.647275    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.648908    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:02.644316    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.645186    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.646890    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.647275    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:02.648908    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:02.653021 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:02.653040 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:02.678261 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:02.678296 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
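The per-component checks all reduce to a single crictl invocation with a name filter; an empty ID list is what produces the "No container was found matching" warnings. A minimal sketch of that probe, assuming only the crictl flags shown in the log (this is not minikube's cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs asks crictl for container IDs matching a name filter;
    // crictl prints one ID per line, so an empty output means no container.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Println(c, ids)
    	}
    }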
	I1216 04:14:05.269784 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:05.280500 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:05.280584 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:05.305398 2088124 cri.go:89] found id: ""
	I1216 04:14:05.305424 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.305432 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:05.305439 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:05.305498 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:05.331233 2088124 cri.go:89] found id: ""
	I1216 04:14:05.331256 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.331264 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:05.331270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:05.331329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:05.356501 2088124 cri.go:89] found id: ""
	I1216 04:14:05.356527 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.356537 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:05.356543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:05.356605 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:05.383678 2088124 cri.go:89] found id: ""
	I1216 04:14:05.383706 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.383714 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:05.383720 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:05.383819 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:05.408800 2088124 cri.go:89] found id: ""
	I1216 04:14:05.408826 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.408835 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:05.408842 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:05.408900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:05.437636 2088124 cri.go:89] found id: ""
	I1216 04:14:05.437664 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.437673 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:05.437680 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:05.437738 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:05.463588 2088124 cri.go:89] found id: ""
	I1216 04:14:05.463619 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.463628 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:05.463635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:05.463707 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:05.492371 2088124 cri.go:89] found id: ""
	I1216 04:14:05.492399 2088124 logs.go:282] 0 containers: []
	W1216 04:14:05.492409 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:05.492418 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:05.492430 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:05.548250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:05.548287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:05.564063 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:05.564088 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:05.632904 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:05.624351    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.625146    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.626904    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.627499    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.629025    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:05.624351    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.625146    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.626904    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.627499    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:05.629025    4468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:05.632926 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:05.632939 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:05.659343 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:05.659376 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:08.201168 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:08.211739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:08.211822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:08.236069 2088124 cri.go:89] found id: ""
	I1216 04:14:08.236097 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.236106 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:08.236118 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:08.236177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:08.261051 2088124 cri.go:89] found id: ""
	I1216 04:14:08.261075 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.261083 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:08.261089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:08.261150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:08.285569 2088124 cri.go:89] found id: ""
	I1216 04:14:08.285592 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.285600 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:08.285606 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:08.285667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:08.311218 2088124 cri.go:89] found id: ""
	I1216 04:14:08.311258 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.311266 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:08.311273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:08.311366 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:08.345673 2088124 cri.go:89] found id: ""
	I1216 04:14:08.345697 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.345706 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:08.345713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:08.345776 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:08.370418 2088124 cri.go:89] found id: ""
	I1216 04:14:08.370441 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.370449 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:08.370456 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:08.370513 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:08.395107 2088124 cri.go:89] found id: ""
	I1216 04:14:08.395170 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.395196 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:08.395215 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:08.395299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:08.419032 2088124 cri.go:89] found id: ""
	I1216 04:14:08.419085 2088124 logs.go:282] 0 containers: []
	W1216 04:14:08.419094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:08.419104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:08.419115 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:08.475411 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:08.475448 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:08.491357 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:08.491391 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:08.557388 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:08.548702    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.549457    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551263    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551890    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.553470    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:08.548702    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.549457    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551263    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.551890    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:08.553470    4582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:08.557412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:08.557426 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:08.582743 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:08.582777 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:11.111145 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:11.123009 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:11.123095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:11.150909 2088124 cri.go:89] found id: ""
	I1216 04:14:11.150934 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.150942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:11.150949 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:11.151075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:11.182574 2088124 cri.go:89] found id: ""
	I1216 04:14:11.182600 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.182610 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:11.182616 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:11.182719 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:11.208283 2088124 cri.go:89] found id: ""
	I1216 04:14:11.208310 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.208319 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:11.208325 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:11.208417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:11.237024 2088124 cri.go:89] found id: ""
	I1216 04:14:11.237052 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.237061 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:11.237069 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:11.237132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:11.265167 2088124 cri.go:89] found id: ""
	I1216 04:14:11.265189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.265197 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:11.265203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:11.265261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:11.290122 2088124 cri.go:89] found id: ""
	I1216 04:14:11.290144 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.290152 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:11.290159 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:11.290217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:11.317188 2088124 cri.go:89] found id: ""
	I1216 04:14:11.317211 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.317219 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:11.317225 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:11.317304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:11.342140 2088124 cri.go:89] found id: ""
	I1216 04:14:11.342164 2088124 logs.go:282] 0 containers: []
	W1216 04:14:11.342173 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:11.342206 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:11.342225 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:11.368021 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:11.368058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:11.397287 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:11.397318 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:11.453124 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:11.453158 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:11.468881 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:11.468910 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:11.535360 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:11.526322    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.526895    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.528582    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.529258    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:11.530769    4710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
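Each retry issues the same eight crictl queries, one per expected component. minikube drives them from Go over SSH (ssh_runner.go); as a shell-only sketch, where the loop itself is illustrative but the crictl invocation and the component names are verbatim from this log, the equivalent is:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

An empty result from every query is what produces the eight No container was found matching warnings in each cycle.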
	I1216 04:14:14.036278 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:14.046954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:14.047104 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:14.072898 2088124 cri.go:89] found id: ""
	I1216 04:14:14.072923 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.072932 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:14.072938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:14.072998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:14.098005 2088124 cri.go:89] found id: ""
	I1216 04:14:14.098041 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.098049 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:14.098056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:14.098123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:14.125919 2088124 cri.go:89] found id: ""
	I1216 04:14:14.125945 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.125954 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:14.125961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:14.126068 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:14.151392 2088124 cri.go:89] found id: ""
	I1216 04:14:14.151416 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.151424 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:14.151430 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:14.151494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:14.181023 2088124 cri.go:89] found id: ""
	I1216 04:14:14.181054 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.181064 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:14.181070 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:14.181139 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:14.206141 2088124 cri.go:89] found id: ""
	I1216 04:14:14.206166 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.206175 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:14.206181 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:14.206250 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:14.230051 2088124 cri.go:89] found id: ""
	I1216 04:14:14.230084 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.230093 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:14.230098 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:14.230183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:14.255362 2088124 cri.go:89] found id: ""
	I1216 04:14:14.255388 2088124 logs.go:282] 0 containers: []
	W1216 04:14:14.255412 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:14.255423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:14.255434 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:14.310536 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:14.310573 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:14.326390 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:14.326478 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:14.389470 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:14.380411    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.381325    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383077    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.383571    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:14.385261    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:14.389493 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:14.389512 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:14.415767 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:14.415804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:16.946959 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:16.978797 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:16.978873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:17.023928 2088124 cri.go:89] found id: ""
	I1216 04:14:17.024005 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.024022 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:17.024030 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:17.024092 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:17.049994 2088124 cri.go:89] found id: ""
	I1216 04:14:17.050024 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.050033 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:17.050040 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:17.050122 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:17.075095 2088124 cri.go:89] found id: ""
	I1216 04:14:17.075120 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.075128 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:17.075134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:17.075195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:17.103161 2088124 cri.go:89] found id: ""
	I1216 04:14:17.103189 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.103209 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:17.103216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:17.103687 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:17.139217 2088124 cri.go:89] found id: ""
	I1216 04:14:17.139246 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.139255 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:17.139261 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:17.139325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:17.170063 2088124 cri.go:89] found id: ""
	I1216 04:14:17.170091 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.170102 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:17.170108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:17.170186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:17.195843 2088124 cri.go:89] found id: ""
	I1216 04:14:17.195869 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.195879 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:17.195885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:17.195966 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:17.221935 2088124 cri.go:89] found id: ""
	I1216 04:14:17.221962 2088124 logs.go:282] 0 containers: []
	W1216 04:14:17.221971 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:17.222001 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:17.222019 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:17.278612 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:17.278650 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:17.295004 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:17.295076 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:17.359742 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:17.351174    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.351977    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.353668    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.354101    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:17.355803    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
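Between retries, five log sources are gathered in varying order: the kubelet and containerd journals, dmesg, container status, and describe nodes. The four local commands, copied from the Run: lines above, can be replayed directly on the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Only the kubelet and containerd journals can explain why the control-plane pods never started; the describe nodes source will keep failing until an apiserver is reachable on 8443.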
	I1216 04:14:17.359766 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:17.359779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:17.385281 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:17.385316 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:19.913504 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:19.924126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:19.924223 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:19.981102 2088124 cri.go:89] found id: ""
	I1216 04:14:19.981182 2088124 logs.go:282] 0 containers: []
	W1216 04:14:19.981204 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:19.981223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:19.981319 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:20.025801 2088124 cri.go:89] found id: ""
	I1216 04:14:20.025875 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.025897 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:20.025918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:20.026010 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:20.057062 2088124 cri.go:89] found id: ""
	I1216 04:14:20.057088 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.057097 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:20.057103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:20.057168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:20.082749 2088124 cri.go:89] found id: ""
	I1216 04:14:20.082774 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.082783 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:20.082790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:20.082854 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:20.109626 2088124 cri.go:89] found id: ""
	I1216 04:14:20.109653 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.109663 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:20.109670 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:20.109731 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:20.134934 2088124 cri.go:89] found id: ""
	I1216 04:14:20.134957 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.134980 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:20.134988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:20.135088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:20.161170 2088124 cri.go:89] found id: ""
	I1216 04:14:20.161197 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.161206 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:20.161213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:20.161299 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:20.187553 2088124 cri.go:89] found id: ""
	I1216 04:14:20.187578 2088124 logs.go:282] 0 containers: []
	W1216 04:14:20.187587 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:20.187597 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:20.187629 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:20.255987 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:20.246960    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.247520    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.249493    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.250032    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:20.251482    5033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:20.256011 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:20.256024 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:20.281257 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:20.281331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:20.310693 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:20.310724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:20.367395 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:20.367436 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:22.883831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:22.894924 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:22.894999 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:22.920332 2088124 cri.go:89] found id: ""
	I1216 04:14:22.920359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.920379 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:22.920386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:22.920445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:22.977215 2088124 cri.go:89] found id: ""
	I1216 04:14:22.977243 2088124 logs.go:282] 0 containers: []
	W1216 04:14:22.977252 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:22.977258 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:22.977317 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:23.028698 2088124 cri.go:89] found id: ""
	I1216 04:14:23.028723 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.028732 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:23.028739 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:23.028804 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:23.055098 2088124 cri.go:89] found id: ""
	I1216 04:14:23.055124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.055133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:23.055140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:23.055209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:23.080450 2088124 cri.go:89] found id: ""
	I1216 04:14:23.080483 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.080493 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:23.080499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:23.080559 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:23.105251 2088124 cri.go:89] found id: ""
	I1216 04:14:23.105275 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.105284 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:23.105296 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:23.105355 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:23.130544 2088124 cri.go:89] found id: ""
	I1216 04:14:23.130573 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.130588 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:23.130594 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:23.130653 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:23.155787 2088124 cri.go:89] found id: ""
	I1216 04:14:23.155863 2088124 logs.go:282] 0 containers: []
	W1216 04:14:23.155879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:23.155889 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:23.155901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:23.184285 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:23.184315 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:23.240021 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:23.240058 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:23.255934 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:23.255969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:23.324390 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:23.316382    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.316885    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.318697    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.319197    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:23.320361    5162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:23.324415 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:23.324432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:25.850349 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:25.861084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:25.861157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:25.885912 2088124 cri.go:89] found id: ""
	I1216 04:14:25.885939 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.885947 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:25.885954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:25.886015 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:25.914385 2088124 cri.go:89] found id: ""
	I1216 04:14:25.914408 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.914416 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:25.914422 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:25.914482 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:25.957379 2088124 cri.go:89] found id: ""
	I1216 04:14:25.957406 2088124 logs.go:282] 0 containers: []
	W1216 04:14:25.957415 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:25.957421 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:25.957480 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:26.020008 2088124 cri.go:89] found id: ""
	I1216 04:14:26.020036 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.020045 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:26.020051 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:26.020118 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:26.047424 2088124 cri.go:89] found id: ""
	I1216 04:14:26.047452 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.047461 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:26.047468 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:26.047534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:26.073161 2088124 cri.go:89] found id: ""
	I1216 04:14:26.073187 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.073208 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:26.073216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:26.073277 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:26.103238 2088124 cri.go:89] found id: ""
	I1216 04:14:26.103260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.103268 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:26.103274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:26.103337 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:26.128964 2088124 cri.go:89] found id: ""
	I1216 04:14:26.128993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:26.129004 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:26.129013 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:26.129025 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:26.185309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:26.185350 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:26.201116 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:26.201191 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:26.261346 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:26.253589    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.254367    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255430    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.255945    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:26.257599    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:26.261367 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:26.261379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:26.286659 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:26.286693 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:28.816260 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:28.826799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:28.826873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:28.851396 2088124 cri.go:89] found id: ""
	I1216 04:14:28.851425 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.851435 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:28.851441 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:28.851503 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:28.875518 2088124 cri.go:89] found id: ""
	I1216 04:14:28.875541 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.875550 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:28.875556 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:28.875614 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:28.904430 2088124 cri.go:89] found id: ""
	I1216 04:14:28.904454 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.904462 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:28.904476 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:28.904537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:28.929129 2088124 cri.go:89] found id: ""
	I1216 04:14:28.929153 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.929162 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:28.929169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:28.929228 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:28.966014 2088124 cri.go:89] found id: ""
	I1216 04:14:28.966042 2088124 logs.go:282] 0 containers: []
	W1216 04:14:28.966051 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:28.966057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:28.966123 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:29.025945 2088124 cri.go:89] found id: ""
	I1216 04:14:29.025972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.025988 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:29.025995 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:29.026064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:29.051899 2088124 cri.go:89] found id: ""
	I1216 04:14:29.051935 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.051946 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:29.051952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:29.052023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:29.080317 2088124 cri.go:89] found id: ""
	I1216 04:14:29.080341 2088124 logs.go:282] 0 containers: []
	W1216 04:14:29.080351 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:29.080361 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:29.080373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:29.135930 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:29.135967 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:29.154187 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:29.154216 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:29.221073 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:29.212783    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.213403    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.214843    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.215170    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:29.216622    5370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:29.221096 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:29.221111 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:29.246641 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:29.246676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:31.779202 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:31.790954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:31.791029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:31.817812 2088124 cri.go:89] found id: ""
	I1216 04:14:31.817897 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.817925 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:31.817946 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:31.818067 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:31.842726 2088124 cri.go:89] found id: ""
	I1216 04:14:31.842753 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.842762 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:31.842769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:31.842832 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:31.868497 2088124 cri.go:89] found id: ""
	I1216 04:14:31.868523 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.868532 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:31.868538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:31.868602 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:31.898624 2088124 cri.go:89] found id: ""
	I1216 04:14:31.898646 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.898655 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:31.898662 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:31.898720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:31.924967 2088124 cri.go:89] found id: ""
	I1216 04:14:31.924993 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.925003 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:31.925011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:31.925074 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:31.966946 2088124 cri.go:89] found id: ""
	I1216 04:14:31.966972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.966981 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:31.966988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:31.967075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:31.999136 2088124 cri.go:89] found id: ""
	I1216 04:14:31.999162 2088124 logs.go:282] 0 containers: []
	W1216 04:14:31.999170 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:31.999177 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:31.999248 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:32.037224 2088124 cri.go:89] found id: ""
	I1216 04:14:32.037260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:32.037269 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:32.037280 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:32.037292 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:32.098221 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:32.098257 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:32.114315 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:32.114346 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:32.179522 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:32.170571    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.171240    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.172913    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.173537    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:32.175422    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
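	Every "describe nodes" attempt in this log fails the same way: kubectl inside the node cannot reach an apiserver on localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container was ever created). A quick port probe confirms it; a sketch, assuming curl is available in the node image (PROFILE is a placeholder for the profile under test):

	    # Probe the apiserver port from inside the node (sketch; PROFILE is a placeholder)
	    minikube -p PROFILE ssh -- curl -sk https://localhost:8443/healthz
	    # "connection refused" here matches the kubectl errors quoted above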
	I1216 04:14:32.179546 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:32.179598 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:32.205901 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:32.205937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
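	The block above is one full iteration of minikube's control-plane probe: a pgrep for a running kube-apiserver process, a crictl query for each expected container (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet, dashboard), then a round of log collection once everything comes back empty. The same checks can be replayed by hand with the commands taken verbatim from the log; a minimal sketch, assuming shell access to the node (PROFILE is a placeholder):

	    # Re-run the two probes minikube uses above (sketch; PROFILE is a placeholder)
	    minikube -p PROFILE ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    minikube -p PROFILE ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	    # Empty output from both corresponds to the 'found id: ""' lines above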
	I1216 04:14:34.736487 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:34.747033 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:34.747125 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:34.774783 2088124 cri.go:89] found id: ""
	I1216 04:14:34.774808 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.774817 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:34.774826 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:34.774892 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:34.804248 2088124 cri.go:89] found id: ""
	I1216 04:14:34.804272 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.804281 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:34.804294 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:34.804356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:34.829461 2088124 cri.go:89] found id: ""
	I1216 04:14:34.829485 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.829493 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:34.829499 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:34.829560 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:34.857116 2088124 cri.go:89] found id: ""
	I1216 04:14:34.857141 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.857151 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:34.857157 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:34.857219 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:34.882336 2088124 cri.go:89] found id: ""
	I1216 04:14:34.882359 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.882367 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:34.882373 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:34.882434 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:34.907931 2088124 cri.go:89] found id: ""
	I1216 04:14:34.907954 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.907962 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:34.907969 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:34.908027 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:34.956047 2088124 cri.go:89] found id: ""
	I1216 04:14:34.956069 2088124 logs.go:282] 0 containers: []
	W1216 04:14:34.956077 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:34.956084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:34.956145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:35.024159 2088124 cri.go:89] found id: ""
	I1216 04:14:35.024183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:35.024197 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:35.024207 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:35.024218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:35.052560 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:35.052632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:35.120169 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:35.109992    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.110996    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.112972    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.113360    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:35.115151    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:35.120193 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:35.120206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:35.148539 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:35.148572 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:35.177137 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:35.177163 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:37.736828 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:37.748034 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:37.748119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:37.774072 2088124 cri.go:89] found id: ""
	I1216 04:14:37.774096 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.774105 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:37.774113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:37.774174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:37.798854 2088124 cri.go:89] found id: ""
	I1216 04:14:37.798879 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.798887 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:37.798893 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:37.798953 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:37.824863 2088124 cri.go:89] found id: ""
	I1216 04:14:37.824889 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.824898 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:37.824905 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:37.824995 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:37.849318 2088124 cri.go:89] found id: ""
	I1216 04:14:37.849340 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.849348 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:37.849354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:37.849418 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:37.874246 2088124 cri.go:89] found id: ""
	I1216 04:14:37.874269 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.874277 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:37.874285 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:37.874343 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:37.900978 2088124 cri.go:89] found id: ""
	I1216 04:14:37.901002 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.901010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:37.901016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:37.901076 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:37.929331 2088124 cri.go:89] found id: ""
	I1216 04:14:37.929360 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.929370 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:37.929376 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:37.929440 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:37.969527 2088124 cri.go:89] found id: ""
	I1216 04:14:37.969556 2088124 logs.go:282] 0 containers: []
	W1216 04:14:37.969564 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:37.969573 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:37.969585 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:38.009528 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:38.009566 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:38.055850 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:38.055880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:38.113260 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:38.113301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:38.129810 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:38.129846 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:38.195392 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:38.187231    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.187971    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.189569    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.190023    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:38.191590    5718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:40.695695 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:40.706489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:40.706566 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:40.733370 2088124 cri.go:89] found id: ""
	I1216 04:14:40.733400 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.733409 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:40.733416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:40.733476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:40.760997 2088124 cri.go:89] found id: ""
	I1216 04:14:40.761027 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.761037 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:40.761043 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:40.761106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:40.785757 2088124 cri.go:89] found id: ""
	I1216 04:14:40.785785 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.785793 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:40.785799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:40.785859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:40.810917 2088124 cri.go:89] found id: ""
	I1216 04:14:40.810946 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.810954 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:40.810961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:40.811021 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:40.837261 2088124 cri.go:89] found id: ""
	I1216 04:14:40.837289 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.837298 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:40.837306 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:40.837367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:40.865095 2088124 cri.go:89] found id: ""
	I1216 04:14:40.865124 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.865133 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:40.865139 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:40.865197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:40.893132 2088124 cri.go:89] found id: ""
	I1216 04:14:40.893156 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.893164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:40.893170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:40.893230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:40.917368 2088124 cri.go:89] found id: ""
	I1216 04:14:40.917390 2088124 logs.go:282] 0 containers: []
	W1216 04:14:40.917398 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:40.917407 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:40.917418 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:40.988706 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:40.988789 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:41.026114 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:41.026141 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:41.097192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:41.088410    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.089258    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.090961    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.091663    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:41.093296    5816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:41.097218 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:41.097232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:41.122894 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:41.122929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
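	Each iteration re-collects the same five diagnostics (kubelet and containerd journals, dmesg, a kubectl describe nodes attempt, and container status), so the repeats add no new information beyond their timestamps. To pull the diagnostics once, the log's own commands can be run directly; a sketch (PROFILE is a placeholder):

	    # Collect the same diagnostics minikube gathers each cycle (sketch)
	    minikube -p PROFILE ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p PROFILE ssh -- sudo journalctl -u containerd -n 400
	    minikube -p PROFILE ssh -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'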
	I1216 04:14:43.655609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:43.666076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:43.666148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:43.697517 2088124 cri.go:89] found id: ""
	I1216 04:14:43.697542 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.697550 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:43.697557 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:43.697617 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:43.722700 2088124 cri.go:89] found id: ""
	I1216 04:14:43.722727 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.722737 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:43.722743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:43.722811 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:43.751469 2088124 cri.go:89] found id: ""
	I1216 04:14:43.751496 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.751509 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:43.751516 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:43.751577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:43.776779 2088124 cri.go:89] found id: ""
	I1216 04:14:43.776804 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.776812 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:43.776818 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:43.776876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:43.801004 2088124 cri.go:89] found id: ""
	I1216 04:14:43.801028 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.801037 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:43.801044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:43.801131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:43.825723 2088124 cri.go:89] found id: ""
	I1216 04:14:43.825747 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.825756 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:43.825763 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:43.825823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:43.854440 2088124 cri.go:89] found id: ""
	I1216 04:14:43.854464 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.854473 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:43.854479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:43.854537 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:43.881228 2088124 cri.go:89] found id: ""
	I1216 04:14:43.881251 2088124 logs.go:282] 0 containers: []
	W1216 04:14:43.881261 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:43.881270 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:43.881282 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:43.908258 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:43.908330 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:43.975235 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:43.975273 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:44.032765 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:44.032798 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:44.097769 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:44.088888    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.089678    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.091397    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.092058    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:44.093573    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:44.097791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:44.097814 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:46.624214 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:46.634860 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:46.634939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:46.662490 2088124 cri.go:89] found id: ""
	I1216 04:14:46.662518 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.662528 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:46.662534 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:46.662598 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:46.687532 2088124 cri.go:89] found id: ""
	I1216 04:14:46.687558 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.687567 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:46.687574 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:46.687639 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:46.711951 2088124 cri.go:89] found id: ""
	I1216 04:14:46.711978 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.711988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:46.711994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:46.712054 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:46.742207 2088124 cri.go:89] found id: ""
	I1216 04:14:46.742241 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.742250 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:46.742257 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:46.742331 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:46.766943 2088124 cri.go:89] found id: ""
	I1216 04:14:46.766972 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.766981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:46.766988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:46.767070 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:46.792400 2088124 cri.go:89] found id: ""
	I1216 04:14:46.792432 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.792442 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:46.792455 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:46.792533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:46.817511 2088124 cri.go:89] found id: ""
	I1216 04:14:46.817533 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.817542 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:46.817548 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:46.817610 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:46.845432 2088124 cri.go:89] found id: ""
	I1216 04:14:46.845455 2088124 logs.go:282] 0 containers: []
	W1216 04:14:46.845464 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:46.845473 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:46.845484 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:46.901017 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:46.901050 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:46.916980 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:46.917012 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:47.034196 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:47.019727    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.020515    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022325    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.022651    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:47.026596    6034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:47.034216 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:47.034230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:47.060131 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:47.060167 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
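	The "container status" command above embeds a runtime fallback: it resolves crictl via which, and if the crictl listing fails for any reason it falls back to docker. Expanded into plain shell, an equivalent sketch of the backtick form in the log:

	    # Prefer crictl; if `which` finds nothing, try the bare name anyway,
	    # and fall back to docker if the crictl invocation fails.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a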
	I1216 04:14:49.592378 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:49.603274 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:49.603390 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:49.628592 2088124 cri.go:89] found id: ""
	I1216 04:14:49.628617 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.628626 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:49.628632 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:49.628693 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:49.654951 2088124 cri.go:89] found id: ""
	I1216 04:14:49.654974 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.654983 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:49.654990 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:49.655079 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:49.680966 2088124 cri.go:89] found id: ""
	I1216 04:14:49.680992 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.681004 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:49.681011 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:49.681077 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:49.705520 2088124 cri.go:89] found id: ""
	I1216 04:14:49.705549 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.705558 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:49.705565 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:49.705624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:49.735615 2088124 cri.go:89] found id: ""
	I1216 04:14:49.735643 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.735653 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:49.735660 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:49.735723 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:49.761693 2088124 cri.go:89] found id: ""
	I1216 04:14:49.761721 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.761730 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:49.761736 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:49.761799 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:49.786810 2088124 cri.go:89] found id: ""
	I1216 04:14:49.786852 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.786866 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:49.786875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:49.786943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:49.815183 2088124 cri.go:89] found id: ""
	I1216 04:14:49.815209 2088124 logs.go:282] 0 containers: []
	W1216 04:14:49.815218 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:49.815236 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:49.815247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:49.870316 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:49.870351 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:49.886698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:49.886724 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:50.017086 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:49.989874    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.000829    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.004526    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.006532    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:50.011272    6150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:14:50.017115 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:50.017137 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:50.046781 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:50.046822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:52.580326 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:52.591108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:52.591184 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:52.619853 2088124 cri.go:89] found id: ""
	I1216 04:14:52.619876 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.619884 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:52.619891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:52.619973 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:52.644168 2088124 cri.go:89] found id: ""
	I1216 04:14:52.644191 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.644199 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:52.644205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:52.644266 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:52.669818 2088124 cri.go:89] found id: ""
	I1216 04:14:52.669842 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.669850 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:52.669856 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:52.669916 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:52.695228 2088124 cri.go:89] found id: ""
	I1216 04:14:52.695252 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.695260 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:52.695267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:52.695329 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:52.720235 2088124 cri.go:89] found id: ""
	I1216 04:14:52.720260 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.720269 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:52.720275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:52.720339 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:52.749551 2088124 cri.go:89] found id: ""
	I1216 04:14:52.749574 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.749582 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:52.749589 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:52.749651 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:52.776351 2088124 cri.go:89] found id: ""
	I1216 04:14:52.776375 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.776383 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:52.776389 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:52.776450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:52.805147 2088124 cri.go:89] found id: ""
	I1216 04:14:52.805175 2088124 logs.go:282] 0 containers: []
	W1216 04:14:52.805185 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:52.805195 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:52.805211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:52.831059 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:52.831098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:14:52.861113 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:52.861143 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:52.916847 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:52.916883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:52.933489 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:52.933517 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:53.043697 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:53.034203    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.035135    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.036921    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.037478    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:53.039232    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
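	The iteration timestamps (04:14:31, :34, :37, ... :55) show the probe firing on a roughly three-second cadence, repeating until minikube's wait deadline expires. A minimal wait loop with the same shape, offered as a sketch rather than minikube's actual implementation:

	    # Poll for the apiserver process about every 3 s (sketch)
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done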
	I1216 04:14:55.544026 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:55.554861 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:55.554956 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:55.578474 2088124 cri.go:89] found id: ""
	I1216 04:14:55.578502 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.578511 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:55.578518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:55.578633 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:55.602756 2088124 cri.go:89] found id: ""
	I1216 04:14:55.602795 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.602804 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:55.602811 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:55.602900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:55.633011 2088124 cri.go:89] found id: ""
	I1216 04:14:55.633035 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.633043 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:55.633049 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:55.633136 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:55.658213 2088124 cri.go:89] found id: ""
	I1216 04:14:55.658247 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.658257 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:55.658280 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:55.658411 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:55.683154 2088124 cri.go:89] found id: ""
	I1216 04:14:55.683183 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.683201 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:55.683208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:55.683280 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:55.707894 2088124 cri.go:89] found id: ""
	I1216 04:14:55.707968 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.707991 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:55.708010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:55.708099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:55.732419 2088124 cri.go:89] found id: ""
	I1216 04:14:55.732506 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.732531 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:55.732543 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:55.732624 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:55.760911 2088124 cri.go:89] found id: ""
	I1216 04:14:55.760981 2088124 logs.go:282] 0 containers: []
	W1216 04:14:55.761007 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:55.761023 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:55.761038 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:55.817437 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:55.817473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:55.833374 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:55.833405 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:55.898151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:55.890310    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.890838    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892319    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892862    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.894354    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:55.890310    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.890838    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892319    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.892862    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:55.894354    6375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:55.898175 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:55.898195 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:55.923776 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:55.923810 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
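	(The "container status" command above is a fallback chain: resolve crictl via `which` — falling back to the bare name — and if the crictl listing fails for any reason, retry with the docker CLI. A minimal Go sketch of the same prefer-crictl-then-docker pattern; containerStatus is an illustrative name, not minikube's API:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to docker,
	// matching: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	func containerStatus() ([]byte, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return out, nil
		}
		// crictl missing or erroring: fall back to the docker CLI
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("neither crictl nor docker produced a listing:", err)
			return
		}
		fmt.Print(string(out))
	}
	)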
	I1216 04:14:58.462512 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:14:58.474113 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:14:58.474190 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:14:58.500558 2088124 cri.go:89] found id: ""
	I1216 04:14:58.500581 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.500590 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:14:58.500597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:14:58.500659 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:14:58.525784 2088124 cri.go:89] found id: ""
	I1216 04:14:58.525809 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.525818 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:14:58.525824 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:14:58.525883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:14:58.550534 2088124 cri.go:89] found id: ""
	I1216 04:14:58.550560 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.550570 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:14:58.550577 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:14:58.550634 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:14:58.577140 2088124 cri.go:89] found id: ""
	I1216 04:14:58.577167 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.577177 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:14:58.577184 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:14:58.577244 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:14:58.605864 2088124 cri.go:89] found id: ""
	I1216 04:14:58.605890 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.605904 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:14:58.605911 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:14:58.605975 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:14:58.634121 2088124 cri.go:89] found id: ""
	I1216 04:14:58.634152 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.634161 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:14:58.634168 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:14:58.634239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:14:58.660170 2088124 cri.go:89] found id: ""
	I1216 04:14:58.660198 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.660207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:14:58.660213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:14:58.660273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:14:58.685306 2088124 cri.go:89] found id: ""
	I1216 04:14:58.685333 2088124 logs.go:282] 0 containers: []
	W1216 04:14:58.685342 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:14:58.685351 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:14:58.685364 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:14:58.741326 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:14:58.741362 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:14:58.757562 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:14:58.757594 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:14:58.823813 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:14:58.815321    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.815799    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817390    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817823    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.819361    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:14:58.815321    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.815799    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817390    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.817823    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:14:58.819361    6487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:14:58.823838 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:14:58.823854 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:14:58.849684 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:14:58.849722 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.379834 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:01.391065 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:01.391142 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:01.417501 2088124 cri.go:89] found id: ""
	I1216 04:15:01.417579 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.417602 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:01.417643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:01.417737 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:01.448334 2088124 cri.go:89] found id: ""
	I1216 04:15:01.448360 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.448368 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:01.448375 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:01.448447 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:01.476980 2088124 cri.go:89] found id: ""
	I1216 04:15:01.477006 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.477015 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:01.477022 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:01.477108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:01.501087 2088124 cri.go:89] found id: ""
	I1216 04:15:01.501110 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.501118 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:01.501125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:01.501183 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:01.526116 2088124 cri.go:89] found id: ""
	I1216 04:15:01.526139 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.526147 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:01.526154 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:01.526217 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:01.552211 2088124 cri.go:89] found id: ""
	I1216 04:15:01.552234 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.552249 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:01.552255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:01.552314 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:01.579190 2088124 cri.go:89] found id: ""
	I1216 04:15:01.579220 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.579229 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:01.579243 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:01.579362 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:01.606084 2088124 cri.go:89] found id: ""
	I1216 04:15:01.606108 2088124 logs.go:282] 0 containers: []
	W1216 04:15:01.606118 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:01.606127 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:01.606139 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:01.638251 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:01.638281 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:01.698103 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:01.698145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:01.714771 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:01.714858 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:01.780079 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:01.771727    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.772399    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774006    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774479    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.776016    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:01.771727    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.772399    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774006    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.774479    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:01.776016    6612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:01.780150 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:01.780177 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:04.307354 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:04.318980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:04.319082 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:04.348465 2088124 cri.go:89] found id: ""
	I1216 04:15:04.348496 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.348506 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:04.348513 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:04.348593 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:04.374442 2088124 cri.go:89] found id: ""
	I1216 04:15:04.374467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.374476 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:04.374485 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:04.374543 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:04.401352 2088124 cri.go:89] found id: ""
	I1216 04:15:04.401376 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.401384 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:04.401390 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:04.401448 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:04.427946 2088124 cri.go:89] found id: ""
	I1216 04:15:04.427969 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.427978 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:04.427984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:04.428044 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:04.453439 2088124 cri.go:89] found id: ""
	I1216 04:15:04.453474 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.453483 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:04.453490 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:04.453549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:04.478368 2088124 cri.go:89] found id: ""
	I1216 04:15:04.478395 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.478403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:04.478409 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:04.478467 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:04.502274 2088124 cri.go:89] found id: ""
	I1216 04:15:04.502303 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.502312 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:04.502318 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:04.502379 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:04.526440 2088124 cri.go:89] found id: ""
	I1216 04:15:04.526467 2088124 logs.go:282] 0 containers: []
	W1216 04:15:04.526475 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:04.526484 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:04.526494 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:04.581559 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:04.581596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:04.597786 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:04.597815 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:04.661194 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:04.653077    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.653620    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655229    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655719    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.657352    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:04.653077    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.653620    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655229    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.655719    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:04.657352    6716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:04.661217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:04.661230 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:04.686508 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:04.686544 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.214226 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:07.226828 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:07.226904 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:07.267774 2088124 cri.go:89] found id: ""
	I1216 04:15:07.267805 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.267814 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:07.267820 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:07.267880 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:07.293953 2088124 cri.go:89] found id: ""
	I1216 04:15:07.293980 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.293988 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:07.293994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:07.294052 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:07.317542 2088124 cri.go:89] found id: ""
	I1216 04:15:07.317568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.317577 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:07.317583 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:07.317695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:07.351422 2088124 cri.go:89] found id: ""
	I1216 04:15:07.351449 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.351458 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:07.351465 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:07.351552 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:07.376043 2088124 cri.go:89] found id: ""
	I1216 04:15:07.376069 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.376092 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:07.376121 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:07.376204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:07.400719 2088124 cri.go:89] found id: ""
	I1216 04:15:07.400749 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.400758 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:07.400765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:07.400849 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:07.425726 2088124 cri.go:89] found id: ""
	I1216 04:15:07.425754 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.425763 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:07.425769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:07.425833 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:07.450385 2088124 cri.go:89] found id: ""
	I1216 04:15:07.450413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:07.450422 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:07.450431 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:07.450444 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:07.482416 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:07.482446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:07.543525 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:07.543569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:07.559963 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:07.559991 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:07.626193 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:07.617713    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.618478    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620010    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620349    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.621831    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:07.617713    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.618478    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620010    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.620349    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:07.621831    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:07.626217 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:07.626233 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.151663 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:10.162850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:10.162922 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:10.218459 2088124 cri.go:89] found id: ""
	I1216 04:15:10.218492 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.218502 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:10.218508 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:10.218581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:10.266687 2088124 cri.go:89] found id: ""
	I1216 04:15:10.266716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.266726 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:10.266732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:10.266794 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:10.297579 2088124 cri.go:89] found id: ""
	I1216 04:15:10.297607 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.297616 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:10.297623 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:10.297682 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:10.327612 2088124 cri.go:89] found id: ""
	I1216 04:15:10.327637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.327646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:10.327652 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:10.327710 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:10.352049 2088124 cri.go:89] found id: ""
	I1216 04:15:10.352073 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.352082 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:10.352088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:10.352150 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:10.380981 2088124 cri.go:89] found id: ""
	I1216 04:15:10.381005 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.381013 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:10.381020 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:10.381083 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:10.405173 2088124 cri.go:89] found id: ""
	I1216 04:15:10.405198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.405207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:10.405213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:10.405271 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:10.430194 2088124 cri.go:89] found id: ""
	I1216 04:15:10.430219 2088124 logs.go:282] 0 containers: []
	W1216 04:15:10.430248 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:10.430259 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:10.430272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:10.486344 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:10.486381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:10.502248 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:10.502278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:10.568856 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:10.561184    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.561736    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563232    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563538    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.565000    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:10.561184    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.561736    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563232    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.563538    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:10.565000    6941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:10.568879 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:10.568893 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:10.595314 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:10.595349 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:13.125478 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:13.136862 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:13.136937 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:13.162398 2088124 cri.go:89] found id: ""
	I1216 04:15:13.162432 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.162442 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:13.162449 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:13.162512 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:13.213417 2088124 cri.go:89] found id: ""
	I1216 04:15:13.213443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.213451 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:13.213457 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:13.213515 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:13.265047 2088124 cri.go:89] found id: ""
	I1216 04:15:13.265074 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.265082 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:13.265089 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:13.265146 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:13.295404 2088124 cri.go:89] found id: ""
	I1216 04:15:13.295431 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.295442 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:13.295448 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:13.295510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:13.320244 2088124 cri.go:89] found id: ""
	I1216 04:15:13.320272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.320281 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:13.320288 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:13.320347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:13.343989 2088124 cri.go:89] found id: ""
	I1216 04:15:13.344013 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.344022 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:13.344028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:13.344088 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:13.367813 2088124 cri.go:89] found id: ""
	I1216 04:15:13.367838 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.367847 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:13.367854 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:13.367914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:13.391747 2088124 cri.go:89] found id: ""
	I1216 04:15:13.391772 2088124 logs.go:282] 0 containers: []
	W1216 04:15:13.391782 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:13.391791 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:13.391802 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:13.416337 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:13.416373 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:13.443257 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:13.443286 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:13.501977 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:13.502016 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:13.517698 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:13.517730 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:13.580974 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:13.572384    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.573100    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.574739    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.575073    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.576736    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:13.572384    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.573100    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.574739    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.575073    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:13.576736    7067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
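	(Each of these cycles scans for every expected control-plane component with `crictl ps -a --quiet --name=<component>` and warns when the ID list comes back empty, which is why every pass prints eight "No container was found matching" lines. A minimal Go sketch of that scan, assuming the component list shown in the log; this is an illustration of the pattern, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names mirror the crictl --name filters in the log.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out)) // one container ID per line when present
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
	)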
	I1216 04:15:16.081274 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:16.092248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:16.092325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:16.118104 2088124 cri.go:89] found id: ""
	I1216 04:15:16.118128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.118138 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:16.118145 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:16.118207 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:16.148494 2088124 cri.go:89] found id: ""
	I1216 04:15:16.148519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.148529 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:16.148535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:16.148600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:16.177106 2088124 cri.go:89] found id: ""
	I1216 04:15:16.177133 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.177142 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:16.177148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:16.177209 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:16.225478 2088124 cri.go:89] found id: ""
	I1216 04:15:16.225512 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.225521 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:16.225528 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:16.225601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:16.263615 2088124 cri.go:89] found id: ""
	I1216 04:15:16.263642 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.263651 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:16.263657 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:16.263717 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:16.288816 2088124 cri.go:89] found id: ""
	I1216 04:15:16.288840 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.288849 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:16.288855 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:16.288915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:16.313866 2088124 cri.go:89] found id: ""
	I1216 04:15:16.313899 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.313909 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:16.313915 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:16.313986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:16.338822 2088124 cri.go:89] found id: ""
	I1216 04:15:16.338847 2088124 logs.go:282] 0 containers: []
	W1216 04:15:16.338865 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:16.338874 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:16.338886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:16.397500 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:16.397535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:16.413373 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:16.413401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:16.481369 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:16.473361    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.474033    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475539    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.475954    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:16.477408    7165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
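The describe-nodes probe fails with a plain TCP "connection refused" on [::1]:8443, not a TLS or authorization error, so nothing is listening on the apiserver port at all, which is consistent with the empty crictl sweeps above. A tiny check that reproduces what kubectl is reporting (port 8443 taken from the log lines above):

```go
// Tiny connectivity check reproducing what the kubectl errors imply:
// a TCP connect to localhost:8443 is refused, i.e. no apiserver listening.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // expect "connection refused"
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}
```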
	I1216 04:15:16.481391 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:16.481404 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:16.506768 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:16.506801 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
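Each failed probe cycle ends with the same four "Gathering logs for ..." steps; the ssh_runner lines show them as plain `/bin/bash -c` pipelines (kubelet and containerd journals, filtered dmesg, container status). A sketch that runs those exact pipelines, with the caveat that here they execute locally for illustration while the real flow runs them over SSH inside the node:

```go
// Sketch of the "Gathering logs for ..." step: the same /bin/bash -c
// commands shown in the ssh_runner lines above, run locally for
// illustration. In the real flow these execute inside the minikube node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"containers": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}
```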
	I1216 04:15:19.036905 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:19.047523 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:19.047594 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:19.071924 2088124 cri.go:89] found id: ""
	I1216 04:15:19.071947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.071956 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:19.071963 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:19.072020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:19.096694 2088124 cri.go:89] found id: ""
	I1216 04:15:19.096716 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.096736 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:19.096742 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:19.096808 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:19.122106 2088124 cri.go:89] found id: ""
	I1216 04:15:19.122129 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.122137 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:19.122144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:19.122204 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:19.151300 2088124 cri.go:89] found id: ""
	I1216 04:15:19.151327 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.151337 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:19.151346 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:19.151407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:19.176879 2088124 cri.go:89] found id: ""
	I1216 04:15:19.176906 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.176915 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:19.176921 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:19.176982 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:19.248606 2088124 cri.go:89] found id: ""
	I1216 04:15:19.248637 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.248646 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:19.248654 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:19.248720 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:19.284067 2088124 cri.go:89] found id: ""
	I1216 04:15:19.284095 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.284105 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:19.284111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:19.284179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:19.309536 2088124 cri.go:89] found id: ""
	I1216 04:15:19.309564 2088124 logs.go:282] 0 containers: []
	W1216 04:15:19.309573 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:19.309583 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:19.309595 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:19.336019 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:19.336059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:19.363926 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:19.363997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:19.420745 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:19.420779 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:19.437274 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:19.437306 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:19.501939 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:19.493862    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.494691    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496168    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.496603    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:19.498063    7294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
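Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match; a non-zero exit status simply means no such process exists. The same probe, sketched in Go:

```go
// Sketch of the apiserver liveness probe that opens each cycle above:
// pgrep -x (exact full-line match) -n (newest) -f (match full command line).
// A non-zero exit status means no matching process exists.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err) // the state this log is stuck in
		return
	}
	fmt.Println("kube-apiserver is running")
}
```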
	I1216 04:15:22.002831 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:22.019000 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:22.019099 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:22.045729 2088124 cri.go:89] found id: ""
	I1216 04:15:22.045753 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.045762 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:22.045769 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:22.045831 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:22.073468 2088124 cri.go:89] found id: ""
	I1216 04:15:22.073494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.073504 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:22.073511 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:22.073572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:22.099372 2088124 cri.go:89] found id: ""
	I1216 04:15:22.099397 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.099407 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:22.099413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:22.099475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:22.124283 2088124 cri.go:89] found id: ""
	I1216 04:15:22.124358 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.124371 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:22.124378 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:22.124509 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:22.149430 2088124 cri.go:89] found id: ""
	I1216 04:15:22.149456 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.149466 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:22.149472 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:22.149532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:22.179789 2088124 cri.go:89] found id: ""
	I1216 04:15:22.179813 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.179822 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:22.179829 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:22.179920 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:22.233299 2088124 cri.go:89] found id: ""
	I1216 04:15:22.233333 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.233342 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:22.233380 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:22.233495 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:22.281260 2088124 cri.go:89] found id: ""
	I1216 04:15:22.281287 2088124 logs.go:282] 0 containers: []
	W1216 04:15:22.281296 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:22.281305 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:22.281354 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:22.299880 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:22.299908 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:22.370389 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:22.359272    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.360819    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.361789    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.363665    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:22.365341    7389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:22.370413 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:22.370427 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:22.395585 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:22.395618 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:22.423071 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:22.423103 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:24.979909 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:24.990414 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:24.990487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:25.022894 2088124 cri.go:89] found id: ""
	I1216 04:15:25.022933 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.022942 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:25.022950 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:25.023035 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:25.057555 2088124 cri.go:89] found id: ""
	I1216 04:15:25.057592 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.057602 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:25.057609 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:25.057674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:25.084421 2088124 cri.go:89] found id: ""
	I1216 04:15:25.084446 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.084455 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:25.084462 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:25.084534 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:25.112223 2088124 cri.go:89] found id: ""
	I1216 04:15:25.112249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.112258 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:25.112266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:25.112340 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:25.138162 2088124 cri.go:89] found id: ""
	I1216 04:15:25.138186 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.138195 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:25.138202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:25.138262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:25.165660 2088124 cri.go:89] found id: ""
	I1216 04:15:25.165689 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.165698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:25.165705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:25.165775 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:25.213233 2088124 cri.go:89] found id: ""
	I1216 04:15:25.213260 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.213269 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:25.213275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:25.213333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:25.254540 2088124 cri.go:89] found id: ""
	I1216 04:15:25.254567 2088124 logs.go:282] 0 containers: []
	W1216 04:15:25.254576 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:25.254586 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:25.254599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:25.290970 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:25.290997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:25.349010 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:25.349046 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:25.364592 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:25.364626 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:25.428643 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:25.420082    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.420822    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.422550    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.423262    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:25.424839    7514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:25.428666 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:25.428680 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:27.954878 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:27.965363 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:27.965430 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:27.991310 2088124 cri.go:89] found id: ""
	I1216 04:15:27.991338 2088124 logs.go:282] 0 containers: []
	W1216 04:15:27.991347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:27.991354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:27.991416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:28.017496 2088124 cri.go:89] found id: ""
	I1216 04:15:28.017519 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.017528 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:28.017535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:28.017600 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:28.043243 2088124 cri.go:89] found id: ""
	I1216 04:15:28.043267 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.043276 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:28.043282 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:28.043349 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:28.070592 2088124 cri.go:89] found id: ""
	I1216 04:15:28.070620 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.070629 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:28.070635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:28.070705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:28.096408 2088124 cri.go:89] found id: ""
	I1216 04:15:28.096430 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.096439 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:28.096446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:28.096517 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:28.122523 2088124 cri.go:89] found id: ""
	I1216 04:15:28.122547 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.122556 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:28.122563 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:28.122627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:28.148233 2088124 cri.go:89] found id: ""
	I1216 04:15:28.148256 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.148264 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:28.148270 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:28.148335 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:28.174686 2088124 cri.go:89] found id: ""
	I1216 04:15:28.174715 2088124 logs.go:282] 0 containers: []
	W1216 04:15:28.174724 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:28.174733 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:28.174745 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:28.248922 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:28.249042 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:28.270319 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:28.270345 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:28.344544 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:28.335802    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.336388    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338081    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.338450    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:28.339995    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:28.344568 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:28.344583 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:28.370869 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:28.370905 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:30.901180 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:30.914236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:30.914316 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:30.943226 2088124 cri.go:89] found id: ""
	I1216 04:15:30.943247 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.943255 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:30.943262 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:30.943320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:30.969548 2088124 cri.go:89] found id: ""
	I1216 04:15:30.969573 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.969581 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:30.969588 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:30.969648 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:30.996727 2088124 cri.go:89] found id: ""
	I1216 04:15:30.996750 2088124 logs.go:282] 0 containers: []
	W1216 04:15:30.996759 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:30.996765 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:30.996823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:31.023099 2088124 cri.go:89] found id: ""
	I1216 04:15:31.023125 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.023133 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:31.023140 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:31.023202 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:31.052543 2088124 cri.go:89] found id: ""
	I1216 04:15:31.052568 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.052577 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:31.052584 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:31.052646 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:31.079096 2088124 cri.go:89] found id: ""
	I1216 04:15:31.079119 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.079128 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:31.079134 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:31.079197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:31.108706 2088124 cri.go:89] found id: ""
	I1216 04:15:31.108777 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.108801 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:31.108815 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:31.108894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:31.138097 2088124 cri.go:89] found id: ""
	I1216 04:15:31.138122 2088124 logs.go:282] 0 containers: []
	W1216 04:15:31.138130 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:31.138140 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:31.138152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:31.163977 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:31.164066 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:31.220358 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:31.220432 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:31.291830 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:31.291912 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:31.307651 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:31.307678 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:31.376724 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:31.367014    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369142    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.369906    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371443    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:31.371922    7746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
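The timestamps show the whole probe-and-gather cycle repeating roughly every three seconds until the start timeout expires. A hedged reconstruction of that poll-until-deadline rhythm (illustrative only, not minikube's actual implementation; the six-minute budget is an assumption):

```go
// Illustrative reconstruction (not minikube's actual code) of the ~3s
// poll-until-deadline rhythm visible in the timestamps above: probe for
// the apiserver and, if absent, gather diagnostics and retry.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed wait budget
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver came up")
			return
		}
		// ...gather kubelet/dmesg/containerd/container-status logs here...
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```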
	I1216 04:15:33.876969 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:33.887678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:33.887751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:33.911475 2088124 cri.go:89] found id: ""
	I1216 04:15:33.911503 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.911513 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:33.911520 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:33.911581 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:33.936829 2088124 cri.go:89] found id: ""
	I1216 04:15:33.936852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.936861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:33.936866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:33.936924 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:33.961061 2088124 cri.go:89] found id: ""
	I1216 04:15:33.961085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.961094 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:33.961101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:33.961168 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:33.985053 2088124 cri.go:89] found id: ""
	I1216 04:15:33.985078 2088124 logs.go:282] 0 containers: []
	W1216 04:15:33.985086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:33.985093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:33.985154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:34.015083 2088124 cri.go:89] found id: ""
	I1216 04:15:34.015112 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.015122 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:34.015129 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:34.015191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:34.040899 2088124 cri.go:89] found id: ""
	I1216 04:15:34.040922 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.040930 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:34.040936 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:34.041001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:34.066663 2088124 cri.go:89] found id: ""
	I1216 04:15:34.066744 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.066771 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:34.066792 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:34.066877 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:34.092631 2088124 cri.go:89] found id: ""
	I1216 04:15:34.092708 2088124 logs.go:282] 0 containers: []
	W1216 04:15:34.092733 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:34.092749 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:34.092762 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:34.151180 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:34.151218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:34.167672 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:34.167704 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:34.288358 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:34.277084    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.277708    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.280379    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282393    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:34.282865    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:34.288382 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:34.288395 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:34.313627 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:34.313660 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
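The container-status step uses a small shell fallback chain: `which crictl || echo crictl` substitutes either the resolved path or the bare name, and if that crictl invocation then fails, the trailing `|| sudo docker ps -a` runs instead. The same logic in Go, with exec.LookPath standing in for `which`:

```go
// Go equivalent of the shell fallback in the container-status command above:
// `which crictl || echo crictl` ps -a || sudo docker ps -a.
// LookPath plays the role of `which`; docker is the fallback runtime query.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	crictl, err := exec.LookPath("crictl")
	if err != nil {
		crictl = "crictl" // same spirit as `echo crictl`: try the bare name anyway
	}
	out, err := exec.Command("sudo", crictl, "ps", "-a").CombinedOutput()
	if err != nil {
		// Fall back to docker, mirroring "|| sudo docker ps -a".
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	fmt.Printf("err=%v\n%s", err, out)
}
```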
	I1216 04:15:36.841874 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:36.852005 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:36.852078 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:36.875574 2088124 cri.go:89] found id: ""
	I1216 04:15:36.875598 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.875608 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:36.875614 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:36.875674 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:36.904945 2088124 cri.go:89] found id: ""
	I1216 04:15:36.905021 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.905045 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:36.905057 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:36.905119 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:36.930221 2088124 cri.go:89] found id: ""
	I1216 04:15:36.930249 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.930259 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:36.930266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:36.930326 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:36.955843 2088124 cri.go:89] found id: ""
	I1216 04:15:36.955870 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.955880 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:36.955887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:36.955947 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:36.979492 2088124 cri.go:89] found id: ""
	I1216 04:15:36.979557 2088124 logs.go:282] 0 containers: []
	W1216 04:15:36.979583 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:36.979596 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:36.979667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:37.004015 2088124 cri.go:89] found id: ""
	I1216 04:15:37.004045 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.004056 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:37.004064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:37.004144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:37.033766 2088124 cri.go:89] found id: ""
	I1216 04:15:37.033841 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.033868 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:37.033887 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:37.033980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:37.058994 2088124 cri.go:89] found id: ""
	I1216 04:15:37.059087 2088124 logs.go:282] 0 containers: []
	W1216 04:15:37.059115 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:37.059132 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:37.059146 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:37.121921 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:37.113226    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.113740    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115405    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.115894    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:37.117543    7946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:15:37.121943 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:37.121956 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:37.148246 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:37.148285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:37.178974 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:37.179077 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:37.249870 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:37.249909 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:39.789446 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:39.800133 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:39.800214 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:39.824765 2088124 cri.go:89] found id: ""
	I1216 04:15:39.824794 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.824803 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:39.824810 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:39.824872 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:39.849338 2088124 cri.go:89] found id: ""
	I1216 04:15:39.849362 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.849370 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:39.849377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:39.849435 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:39.873874 2088124 cri.go:89] found id: ""
	I1216 04:15:39.873902 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.873911 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:39.873917 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:39.873976 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:39.899109 2088124 cri.go:89] found id: ""
	I1216 04:15:39.899134 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.899143 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:39.899149 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:39.899210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:39.924102 2088124 cri.go:89] found id: ""
	I1216 04:15:39.924128 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.924137 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:39.924143 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:39.924208 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:39.949033 2088124 cri.go:89] found id: ""
	I1216 04:15:39.949065 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.949074 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:39.949082 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:39.949144 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:39.975169 2088124 cri.go:89] found id: ""
	I1216 04:15:39.975198 2088124 logs.go:282] 0 containers: []
	W1216 04:15:39.975207 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:39.975213 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:39.975273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:40.028056 2088124 cri.go:89] found id: ""
	I1216 04:15:40.028085 2088124 logs.go:282] 0 containers: []
	W1216 04:15:40.028094 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:40.028104 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:40.028116 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:40.085250 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:40.085285 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:40.101589 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:40.101621 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:40.174562 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:40.165816    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.166569    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.167348    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.168955    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.169429    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:40.165816    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.166569    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.167348    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.168955    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:40.169429    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:40.174584 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:40.174599 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:40.202884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:40.202920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
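The cycle above is minikube's log-gathering loop: it polls for a kube-apiserver process, asks crictl for each expected control-plane container by name, and, when every query returns an empty ID list, falls back to collecting kubelet, dmesg, containerd, and container-status output. A minimal sketch of the same probe sequence, using the exact commands recorded in the log (run inside the node, e.g. via "minikube ssh -p <profile>"; an approximation for manual triage, not minikube's own Go implementation):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # Same query the log shows; an empty result means the container
      # was never created.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
    # The log sources minikube falls back to when nothing is running:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
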
	I1216 04:15:42.752364 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:42.763300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:42.763369 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:42.792503 2088124 cri.go:89] found id: ""
	I1216 04:15:42.792529 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.792539 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:42.792545 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:42.792608 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:42.821201 2088124 cri.go:89] found id: ""
	I1216 04:15:42.821226 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.821235 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:42.821242 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:42.821304 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:42.847075 2088124 cri.go:89] found id: ""
	I1216 04:15:42.847102 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.847110 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:42.847117 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:42.847179 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:42.871486 2088124 cri.go:89] found id: ""
	I1216 04:15:42.871510 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.871519 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:42.871525 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:42.871589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:42.896375 2088124 cri.go:89] found id: ""
	I1216 04:15:42.896402 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.896412 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:42.896418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:42.896505 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:42.921735 2088124 cri.go:89] found id: ""
	I1216 04:15:42.921811 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.921844 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:42.921865 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:42.921950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:42.950925 2088124 cri.go:89] found id: ""
	I1216 04:15:42.950947 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.950955 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:42.950961 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:42.951019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:42.975785 2088124 cri.go:89] found id: ""
	I1216 04:15:42.975809 2088124 logs.go:282] 0 containers: []
	W1216 04:15:42.975817 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:42.975826 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:42.975840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:42.991441 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:42.991473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:43.054494 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:43.046352    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.047054    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.048590    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.049073    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.050630    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:43.046352    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.047054    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.048590    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.049073    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:43.050630    8173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:43.054518 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:43.054532 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:43.079941 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:43.079979 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:43.107712 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:43.107738 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:45.663276 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:45.674206 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:45.674325 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:45.698711 2088124 cri.go:89] found id: ""
	I1216 04:15:45.698736 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.698745 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:45.698752 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:45.698822 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:45.723389 2088124 cri.go:89] found id: ""
	I1216 04:15:45.723413 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.723422 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:45.723428 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:45.723494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:45.748842 2088124 cri.go:89] found id: ""
	I1216 04:15:45.748919 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.748935 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:45.748942 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:45.749002 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:45.777156 2088124 cri.go:89] found id: ""
	I1216 04:15:45.777236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.777251 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:45.777265 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:45.777327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:45.802462 2088124 cri.go:89] found id: ""
	I1216 04:15:45.802494 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.802503 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:45.802510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:45.802583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:45.829417 2088124 cri.go:89] found id: ""
	I1216 04:15:45.829442 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.829451 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:45.829458 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:45.829521 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:45.854934 2088124 cri.go:89] found id: ""
	I1216 04:15:45.854962 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.854971 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:45.854977 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:45.855095 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:45.879249 2088124 cri.go:89] found id: ""
	I1216 04:15:45.879272 2088124 logs.go:282] 0 containers: []
	W1216 04:15:45.879280 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:45.879289 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:45.879301 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:45.895118 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:45.895155 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:45.958262 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:45.949768    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.950592    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952181    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952681    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.954304    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:45.949768    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.950592    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952181    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.952681    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:45.954304    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:45.958284 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:45.958298 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:45.984226 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:45.984260 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:46.015984 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:46.016011 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:48.576053 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:48.586849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:48.586923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:48.612368 2088124 cri.go:89] found id: ""
	I1216 04:15:48.612394 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.612404 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:48.612410 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:48.612470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:48.641259 2088124 cri.go:89] found id: ""
	I1216 04:15:48.641288 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.641297 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:48.641304 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:48.641368 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:48.665587 2088124 cri.go:89] found id: ""
	I1216 04:15:48.665614 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.665624 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:48.665629 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:48.665704 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:48.691123 2088124 cri.go:89] found id: ""
	I1216 04:15:48.691151 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.691160 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:48.691167 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:48.691227 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:48.716275 2088124 cri.go:89] found id: ""
	I1216 04:15:48.716304 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.716314 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:48.716320 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:48.716381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:48.747209 2088124 cri.go:89] found id: ""
	I1216 04:15:48.747236 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.747244 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:48.747250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:48.747312 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:48.776967 2088124 cri.go:89] found id: ""
	I1216 04:15:48.776991 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.777001 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:48.777010 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:48.777071 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:48.800940 2088124 cri.go:89] found id: ""
	I1216 04:15:48.800965 2088124 logs.go:282] 0 containers: []
	W1216 04:15:48.800975 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:48.800985 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:48.800997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:48.856499 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:48.856533 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:48.872208 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:48.872239 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:48.945493 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:48.936737    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.937621    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939381    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939979    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.941612    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:48.936737    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.937621    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939381    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.939979    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:48.941612    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:48.945516 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:48.945529 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:48.970477 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:48.970510 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
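Every "kubectl describe nodes" attempt above fails the same way: the client dials the apiserver at [::1]:8443 and gets connection refused, meaning nothing is listening on the port at all, rather than a running apiserver rejecting the request. Two quick checks that distinguish those cases (a sketch for manual triage inside the node; ss and the /livez health endpoint are standard tools, but neither appears in this log):

    # Is anything bound to the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # If an apiserver were up, its health endpoint would answer:
    curl -k https://localhost:8443/livez
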
	I1216 04:15:51.499166 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:51.515506 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:51.515579 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:51.540271 2088124 cri.go:89] found id: ""
	I1216 04:15:51.540297 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.540306 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:51.540313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:51.540373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:51.564213 2088124 cri.go:89] found id: ""
	I1216 04:15:51.564235 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.564244 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:51.564250 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:51.564309 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:51.592901 2088124 cri.go:89] found id: ""
	I1216 04:15:51.592924 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.592933 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:51.592939 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:51.593001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:51.617803 2088124 cri.go:89] found id: ""
	I1216 04:15:51.617831 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.617840 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:51.617847 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:51.617906 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:51.643791 2088124 cri.go:89] found id: ""
	I1216 04:15:51.643814 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.643822 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:51.643830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:51.643894 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:51.669293 2088124 cri.go:89] found id: ""
	I1216 04:15:51.669324 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.669335 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:51.669345 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:51.669416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:51.697129 2088124 cri.go:89] found id: ""
	I1216 04:15:51.697155 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.697164 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:51.697170 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:51.697235 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:51.725605 2088124 cri.go:89] found id: ""
	I1216 04:15:51.725631 2088124 logs.go:282] 0 containers: []
	W1216 04:15:51.725640 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:51.725650 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:51.725664 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:51.781941 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:51.781976 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:51.798346 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:51.798372 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:51.861456 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:51.853947    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.854888    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.855930    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.856728    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.857532    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:51.853947    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.854888    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.855930    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.856728    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:51.857532    8517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:51.861478 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:51.861491 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:51.886476 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:51.886511 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:54.421185 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:54.432641 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:54.432721 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:54.487902 2088124 cri.go:89] found id: ""
	I1216 04:15:54.487936 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.487945 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:54.487952 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:54.488026 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:54.530347 2088124 cri.go:89] found id: ""
	I1216 04:15:54.530372 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.530381 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:54.530387 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:54.530450 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:54.558305 2088124 cri.go:89] found id: ""
	I1216 04:15:54.558339 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.558348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:54.558354 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:54.558423 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:54.584247 2088124 cri.go:89] found id: ""
	I1216 04:15:54.584271 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.584280 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:54.584286 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:54.584347 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:54.608497 2088124 cri.go:89] found id: ""
	I1216 04:15:54.608526 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.608536 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:54.608542 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:54.608601 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:54.634256 2088124 cri.go:89] found id: ""
	I1216 04:15:54.634283 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.634293 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:54.634301 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:54.634360 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:54.659092 2088124 cri.go:89] found id: ""
	I1216 04:15:54.659132 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.659141 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:54.659148 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:54.659210 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:54.683797 2088124 cri.go:89] found id: ""
	I1216 04:15:54.683823 2088124 logs.go:282] 0 containers: []
	W1216 04:15:54.683832 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:54.683841 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:54.683852 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:15:54.713212 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:54.713238 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:54.769163 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:54.769199 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:54.784702 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:54.784742 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:54.855379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:54.846290    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.847296    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.848173    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.849670    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.850187    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:54.846290    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.847296    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.848173    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.849670    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:54.850187    8642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:54.855412 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:54.855425 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:57.382388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:15:57.393144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:15:57.393234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:15:57.418375 2088124 cri.go:89] found id: ""
	I1216 04:15:57.418443 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.418467 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:15:57.418486 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:15:57.418574 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:15:57.495590 2088124 cri.go:89] found id: ""
	I1216 04:15:57.495668 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.495694 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:15:57.495716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:15:57.495813 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:15:57.536762 2088124 cri.go:89] found id: ""
	I1216 04:15:57.536786 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.536795 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:15:57.536801 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:15:57.536859 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:15:57.573379 2088124 cri.go:89] found id: ""
	I1216 04:15:57.573403 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.573412 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:15:57.573418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:15:57.573488 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:15:57.601415 2088124 cri.go:89] found id: ""
	I1216 04:15:57.601439 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.601447 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:15:57.601454 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:15:57.601514 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:15:57.625828 2088124 cri.go:89] found id: ""
	I1216 04:15:57.625852 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.625860 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:15:57.625866 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:15:57.625932 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:15:57.651508 2088124 cri.go:89] found id: ""
	I1216 04:15:57.651534 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.651543 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:15:57.651549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:15:57.651609 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:15:57.678194 2088124 cri.go:89] found id: ""
	I1216 04:15:57.678228 2088124 logs.go:282] 0 containers: []
	W1216 04:15:57.678242 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:15:57.678252 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:15:57.678287 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:15:57.733879 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:15:57.733916 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:15:57.750633 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:15:57.750661 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:15:57.828100 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:15:57.813970    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.814594    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.815982    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.822718    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.823836    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:15:57.813970    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.814594    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.815982    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.822718    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:15:57.823836    8743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:15:57.828131 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:15:57.828145 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:15:57.855013 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:15:57.855070 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
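Since crictl keeps coming back empty, the place the control plane would normally originate is kubelet's static-pod directory. Assuming the standard kubeadm layout that minikube also uses (the /etc/kubernetes/manifests path is an assumption; it is not shown in this log), the next manual step would be:

    # Are the control-plane manifests on disk at all?
    ls -l /etc/kubernetes/manifests/
    # And what does kubelet report about starting them?
    sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|static|fail|error'
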
	I1216 04:16:00.384284 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:00.398189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:00.398285 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:00.442307 2088124 cri.go:89] found id: ""
	I1216 04:16:00.442337 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.442347 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:00.442404 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:00.442487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:00.505962 2088124 cri.go:89] found id: ""
	I1216 04:16:00.505986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.505994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:00.506001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:00.506064 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:00.548862 2088124 cri.go:89] found id: ""
	I1216 04:16:00.548940 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.548965 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:00.548984 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:00.549098 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:00.576916 2088124 cri.go:89] found id: ""
	I1216 04:16:00.576939 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.576948 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:00.576954 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:00.577013 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:00.602863 2088124 cri.go:89] found id: ""
	I1216 04:16:00.602891 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.602901 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:00.602907 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:00.602971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:00.628659 2088124 cri.go:89] found id: ""
	I1216 04:16:00.628688 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.628698 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:00.628705 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:00.628771 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:00.654429 2088124 cri.go:89] found id: ""
	I1216 04:16:00.654466 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.654475 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:00.654481 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:00.654556 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:00.679835 2088124 cri.go:89] found id: ""
	I1216 04:16:00.679863 2088124 logs.go:282] 0 containers: []
	W1216 04:16:00.679877 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:00.679890 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:00.679901 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:00.738456 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:00.738501 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:00.754802 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:00.754838 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:00.824660 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:00.815557    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.816379    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818197    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.818825    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:00.820610    8855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:00.824683 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:00.824698 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:00.850142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:00.850176 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:03.377190 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:03.388732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:03.388827 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:03.417059 2088124 cri.go:89] found id: ""
	I1216 04:16:03.417082 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.417090 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:03.417096 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:03.417157 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:03.473568 2088124 cri.go:89] found id: ""
	I1216 04:16:03.473591 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.473599 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:03.473605 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:03.473676 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:03.510076 2088124 cri.go:89] found id: ""
	I1216 04:16:03.510097 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.510105 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:03.510111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:03.510170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:03.546041 2088124 cri.go:89] found id: ""
	I1216 04:16:03.546063 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.546072 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:03.546086 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:03.546148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:03.574587 2088124 cri.go:89] found id: ""
	I1216 04:16:03.574672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.574704 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:03.574747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:03.574847 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:03.600940 2088124 cri.go:89] found id: ""
	I1216 04:16:03.600964 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.600973 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:03.600979 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:03.601041 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:03.626500 2088124 cri.go:89] found id: ""
	I1216 04:16:03.626524 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.626537 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:03.626544 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:03.626613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:03.651278 2088124 cri.go:89] found id: ""
	I1216 04:16:03.651345 2088124 logs.go:282] 0 containers: []
	W1216 04:16:03.651368 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:03.651386 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:03.651401 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:03.713437 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:03.704982    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.705525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707260    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.707865    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:03.709525    8961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:03.713461 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:03.713476 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:03.739122 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:03.739183 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:03.769731 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:03.769761 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:03.825343 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:03.825379 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:06.341217 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:06.351622 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:06.351695 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:06.377192 2088124 cri.go:89] found id: ""
	I1216 04:16:06.377220 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.377229 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:06.377236 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:06.377298 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:06.407491 2088124 cri.go:89] found id: ""
	I1216 04:16:06.407516 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.407524 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:06.407530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:06.407587 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:06.432854 2088124 cri.go:89] found id: ""
	I1216 04:16:06.432881 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.432890 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:06.432896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:06.432954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:06.508461 2088124 cri.go:89] found id: ""
	I1216 04:16:06.508483 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.508502 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:06.508510 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:06.508572 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:06.537008 2088124 cri.go:89] found id: ""
	I1216 04:16:06.537031 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.537039 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:06.537045 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:06.537102 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:06.563652 2088124 cri.go:89] found id: ""
	I1216 04:16:06.563723 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.563740 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:06.563747 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:06.563841 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:06.589523 2088124 cri.go:89] found id: ""
	I1216 04:16:06.589599 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.589623 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:06.589642 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:06.589725 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:06.615510 2088124 cri.go:89] found id: ""
	I1216 04:16:06.615577 2088124 logs.go:282] 0 containers: []
	W1216 04:16:06.615599 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:06.615623 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:06.615655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:06.670726 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:06.670760 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:06.689463 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:06.689495 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:06.755339 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:06.747246    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.748070    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.749790    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.750091    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:06.751596    9078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:06.755362 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:06.755375 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:06.780884 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:06.780917 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:09.313406 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:09.323603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:09.323673 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:09.351600 2088124 cri.go:89] found id: ""
	I1216 04:16:09.351624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.351632 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:09.351639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:09.351699 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:09.375845 2088124 cri.go:89] found id: ""
	I1216 04:16:09.375869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.375878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:09.375885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:09.375950 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:09.400733 2088124 cri.go:89] found id: ""
	I1216 04:16:09.400756 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.400764 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:09.400770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:09.400830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:09.423762 2088124 cri.go:89] found id: ""
	I1216 04:16:09.423785 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.423793 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:09.423799 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:09.423856 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:09.457898 2088124 cri.go:89] found id: ""
	I1216 04:16:09.457971 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.457993 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:09.458014 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:09.458132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:09.507416 2088124 cri.go:89] found id: ""
	I1216 04:16:09.507445 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.507453 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:09.507459 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:09.507518 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:09.542968 2088124 cri.go:89] found id: ""
	I1216 04:16:09.543084 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.543115 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:09.543169 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:09.543294 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:09.568289 2088124 cri.go:89] found id: ""
	I1216 04:16:09.568313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:09.568321 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:09.568331 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:09.568343 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:09.630690 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:09.621816    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.622488    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624212    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.624769    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:09.626372    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:09.630716 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:09.630732 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:09.656388 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:09.656424 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:09.684126 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:09.684152 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:09.742624 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:09.742662 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:12.259263 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:12.269891 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:12.269959 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:12.294506 2088124 cri.go:89] found id: ""
	I1216 04:16:12.294532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.294541 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:12.294546 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:12.294628 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:12.318895 2088124 cri.go:89] found id: ""
	I1216 04:16:12.318924 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.318932 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:12.318938 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:12.318994 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:12.344134 2088124 cri.go:89] found id: ""
	I1216 04:16:12.344158 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.344167 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:12.344173 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:12.344234 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:12.368552 2088124 cri.go:89] found id: ""
	I1216 04:16:12.368574 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.368583 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:12.368590 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:12.368654 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:12.396826 2088124 cri.go:89] found id: ""
	I1216 04:16:12.396854 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.396863 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:12.396870 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:12.396931 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:12.422048 2088124 cri.go:89] found id: ""
	I1216 04:16:12.422076 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.422085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:12.422092 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:12.422153 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:12.485647 2088124 cri.go:89] found id: ""
	I1216 04:16:12.485669 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.485677 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:12.485684 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:12.485750 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:12.529516 2088124 cri.go:89] found id: ""
	I1216 04:16:12.529539 2088124 logs.go:282] 0 containers: []
	W1216 04:16:12.529547 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:12.529557 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:12.529569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:12.545674 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:12.545705 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:12.608192 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:12.599988    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.600547    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602099    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.602578    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:12.604093    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:12.608257 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:12.608279 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:12.633428 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:12.633463 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:12.661070 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:12.661097 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:15.217877 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:15.228678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:15.228748 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:15.253119 2088124 cri.go:89] found id: ""
	I1216 04:16:15.253143 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.253152 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:15.253158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:15.253220 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:15.285145 2088124 cri.go:89] found id: ""
	I1216 04:16:15.285168 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.285177 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:15.285183 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:15.285243 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:15.311311 2088124 cri.go:89] found id: ""
	I1216 04:16:15.311339 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.311348 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:15.311355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:15.311416 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:15.336241 2088124 cri.go:89] found id: ""
	I1216 04:16:15.336271 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.336286 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:15.336293 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:15.336354 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:15.362230 2088124 cri.go:89] found id: ""
	I1216 04:16:15.362258 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.362268 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:15.362275 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:15.362334 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:15.387340 2088124 cri.go:89] found id: ""
	I1216 04:16:15.387362 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.387371 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:15.387377 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:15.387437 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:15.412173 2088124 cri.go:89] found id: ""
	I1216 04:16:15.412201 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.412210 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:15.412217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:15.412281 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:15.454276 2088124 cri.go:89] found id: ""
	I1216 04:16:15.454354 2088124 logs.go:282] 0 containers: []
	W1216 04:16:15.454378 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:15.454404 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:15.454446 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:15.556767 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:15.556806 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:15.573628 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:15.573670 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:15.638801 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:15.629487    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.630048    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.631845    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.632428    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:15.634191    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:15.638865 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:15.638886 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:15.663907 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:15.663944 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:18.197135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:18.208099 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:18.208177 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:18.234350 2088124 cri.go:89] found id: ""
	I1216 04:16:18.234379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.234388 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:18.234394 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:18.234459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:18.258985 2088124 cri.go:89] found id: ""
	I1216 04:16:18.259013 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.259022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:18.259028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:18.259110 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:18.284132 2088124 cri.go:89] found id: ""
	I1216 04:16:18.284156 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.284164 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:18.284171 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:18.284230 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:18.309961 2088124 cri.go:89] found id: ""
	I1216 04:16:18.309989 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.309997 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:18.310004 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:18.310108 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:18.336186 2088124 cri.go:89] found id: ""
	I1216 04:16:18.336212 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.336221 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:18.336228 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:18.336289 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:18.361829 2088124 cri.go:89] found id: ""
	I1216 04:16:18.361858 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.361867 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:18.361874 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:18.361934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:18.388363 2088124 cri.go:89] found id: ""
	I1216 04:16:18.388385 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.388394 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:18.388400 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:18.388463 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:18.416963 2088124 cri.go:89] found id: ""
	I1216 04:16:18.416988 2088124 logs.go:282] 0 containers: []
	W1216 04:16:18.416996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:18.417006 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:18.417018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:18.500995 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:18.503604 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:18.521452 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:18.521531 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:18.589729 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:18.580618    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.581508    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583296    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.583964    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:18.585797    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:18.589761 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:18.589775 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:18.616012 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:18.616047 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.144794 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:21.155656 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:21.155729 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:21.184379 2088124 cri.go:89] found id: ""
	I1216 04:16:21.184403 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.184411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:21.184417 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:21.184484 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:21.210137 2088124 cri.go:89] found id: ""
	I1216 04:16:21.210163 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.210172 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:21.210178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:21.210240 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:21.235283 2088124 cri.go:89] found id: ""
	I1216 04:16:21.235307 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.235315 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:21.235321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:21.235381 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:21.263715 2088124 cri.go:89] found id: ""
	I1216 04:16:21.263738 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.263746 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:21.263753 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:21.263823 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:21.287600 2088124 cri.go:89] found id: ""
	I1216 04:16:21.287624 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.287632 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:21.287638 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:21.287698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:21.315897 2088124 cri.go:89] found id: ""
	I1216 04:16:21.315919 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.315927 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:21.315934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:21.315993 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:21.339842 2088124 cri.go:89] found id: ""
	I1216 04:16:21.339866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.339874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:21.339880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:21.339939 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:21.364501 2088124 cri.go:89] found id: ""
	I1216 04:16:21.364526 2088124 logs.go:282] 0 containers: []
	W1216 04:16:21.364535 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:21.364544 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:21.364556 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:21.379974 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:21.380060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:21.474639 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:21.442912    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.443871    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.447436    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.448028    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:21.468365    9631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:16:21.474664 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:21.474676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:21.531857 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:21.531938 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:21.561122 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:21.561149 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:24.116616 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:24.126986 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:24.127075 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:24.154481 2088124 cri.go:89] found id: ""
	I1216 04:16:24.154507 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.154526 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:24.154533 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:24.154591 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:24.180064 2088124 cri.go:89] found id: ""
	I1216 04:16:24.180087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.180095 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:24.180103 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:24.180165 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:24.205398 2088124 cri.go:89] found id: ""
	I1216 04:16:24.205424 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.205433 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:24.205440 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:24.205499 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:24.230340 2088124 cri.go:89] found id: ""
	I1216 04:16:24.230369 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.230377 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:24.230384 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:24.230445 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:24.255009 2088124 cri.go:89] found id: ""
	I1216 04:16:24.255056 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.255066 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:24.255072 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:24.255131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:24.280187 2088124 cri.go:89] found id: ""
	I1216 04:16:24.280214 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.280224 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:24.280230 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:24.280287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:24.304688 2088124 cri.go:89] found id: ""
	I1216 04:16:24.304711 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.304720 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:24.304726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:24.304788 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:24.329482 2088124 cri.go:89] found id: ""
	I1216 04:16:24.329505 2088124 logs.go:282] 0 containers: []
	W1216 04:16:24.329514 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:24.329523 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:24.329535 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:24.345077 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:24.345106 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:24.410594 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:24.402842    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.403261    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.404850    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.405187    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.406756    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:24.402842    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.403261    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.404850    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.405187    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:24.406756    9743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:24.410665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:24.410695 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:24.437142 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:24.437180 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:24.512425 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:24.512454 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
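
Each `found id: ""` result above comes from a per-component name filter over all CRI containers. Inside the node, the probe is exactly the command the log shows:

    sudo crictl ps -a --quiet --name=kube-apiserver
    # prints one container ID per line; empty output means no
    # container for that component exists, running or exited
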
	I1216 04:16:27.075945 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:27.086676 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:27.086751 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:27.111375 2088124 cri.go:89] found id: ""
	I1216 04:16:27.111402 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.111411 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:27.111418 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:27.111479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:27.136068 2088124 cri.go:89] found id: ""
	I1216 04:16:27.136100 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.136109 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:27.136115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:27.136174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:27.160473 2088124 cri.go:89] found id: ""
	I1216 04:16:27.160503 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.160513 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:27.160519 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:27.160580 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:27.186608 2088124 cri.go:89] found id: ""
	I1216 04:16:27.186632 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.186639 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:27.186646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:27.186708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:27.217149 2088124 cri.go:89] found id: ""
	I1216 04:16:27.217173 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.217182 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:27.217189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:27.217253 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:27.243558 2088124 cri.go:89] found id: ""
	I1216 04:16:27.243583 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.243592 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:27.243598 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:27.243665 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:27.269387 2088124 cri.go:89] found id: ""
	I1216 04:16:27.269415 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.269425 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:27.269433 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:27.269494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:27.296709 2088124 cri.go:89] found id: ""
	I1216 04:16:27.296778 2088124 logs.go:282] 0 containers: []
	W1216 04:16:27.296790 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:27.296800 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:27.296811 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:27.327331 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:27.327359 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:27.384171 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:27.384206 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:27.400922 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:27.400958 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:27.528794 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:27.519212    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.519883    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521482    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521988    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.523599    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:27.519212    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.519883    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521482    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.521988    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:27.523599    9869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:27.528819 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:27.528835 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
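
The recurring `connection refused` from `kubectl describe nodes` is consistent with the empty crictl listings: with no kube-apiserver container, nothing serves https://localhost:8443 inside the node. Two hedged follow-up probes (not taken from the log; `ss` availability and the `/livez` endpoint are assumptions about the environment):

    sudo ss -ltnp | grep 8443             # expect no listener while the apiserver is down
    curl -k https://localhost:8443/livez  # returns "ok" once an apiserver is serving
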
	I1216 04:16:30.057685 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:30.079715 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:30.079801 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:30.110026 2088124 cri.go:89] found id: ""
	I1216 04:16:30.110054 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.110063 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:30.110076 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:30.110143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:30.137960 2088124 cri.go:89] found id: ""
	I1216 04:16:30.137986 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.137994 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:30.138001 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:30.138065 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:30.165148 2088124 cri.go:89] found id: ""
	I1216 04:16:30.165177 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.165186 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:30.165194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:30.165283 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:30.192836 2088124 cri.go:89] found id: ""
	I1216 04:16:30.192866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.192875 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:30.192883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:30.192951 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:30.220187 2088124 cri.go:89] found id: ""
	I1216 04:16:30.220213 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.220227 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:30.220233 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:30.220333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:30.247843 2088124 cri.go:89] found id: ""
	I1216 04:16:30.247872 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.247882 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:30.247889 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:30.247980 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:30.274429 2088124 cri.go:89] found id: ""
	I1216 04:16:30.274454 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.274463 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:30.274470 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:30.274583 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:30.302775 2088124 cri.go:89] found id: ""
	I1216 04:16:30.302809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:30.302819 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:30.302844 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:30.302863 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:30.318968 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:30.318999 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:30.383767 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:30.374814    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.375258    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.376919    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.377544    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:30.379234    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:30.383790 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:30.383804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:30.410095 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:30.410131 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:30.468723 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:30.468804 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:33.056394 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:33.067079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:33.067155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:33.092150 2088124 cri.go:89] found id: ""
	I1216 04:16:33.092178 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.092188 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:33.092194 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:33.092260 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:33.117824 2088124 cri.go:89] found id: ""
	I1216 04:16:33.117852 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.117861 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:33.117868 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:33.117927 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:33.143646 2088124 cri.go:89] found id: ""
	I1216 04:16:33.143672 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.143680 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:33.143686 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:33.143744 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:33.169791 2088124 cri.go:89] found id: ""
	I1216 04:16:33.169818 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.169826 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:33.169833 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:33.169893 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:33.194288 2088124 cri.go:89] found id: ""
	I1216 04:16:33.194313 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.194323 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:33.194329 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:33.194388 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:33.221028 2088124 cri.go:89] found id: ""
	I1216 04:16:33.221062 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.221071 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:33.221078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:33.221178 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:33.245742 2088124 cri.go:89] found id: ""
	I1216 04:16:33.245769 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.245778 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:33.245784 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:33.245852 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:33.270847 2088124 cri.go:89] found id: ""
	I1216 04:16:33.270870 2088124 logs.go:282] 0 containers: []
	W1216 04:16:33.270879 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:33.270888 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:33.270899 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:33.327247 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:33.327283 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:33.342917 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:33.342947 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:33.407775 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:33.399278   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.399947   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401468   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.401954   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:33.403432   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:33.407796 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:33.407809 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:33.433956 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:33.433990 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
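
Each retry round opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: the loop keeps re-gathering logs until a kube-apiserver process whose full command line matches that pattern appears. The same liveness check run by hand, as a sketch:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "no matching kube-apiserver process yet"
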
	I1216 04:16:36.019705 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:36.031406 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:36.031494 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:36.061621 2088124 cri.go:89] found id: ""
	I1216 04:16:36.061647 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.061657 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:36.061664 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:36.061730 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:36.088137 2088124 cri.go:89] found id: ""
	I1216 04:16:36.088162 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.088171 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:36.088178 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:36.088239 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:36.113810 2088124 cri.go:89] found id: ""
	I1216 04:16:36.113833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.113842 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:36.113849 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:36.113913 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:36.139840 2088124 cri.go:89] found id: ""
	I1216 04:16:36.139866 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.139874 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:36.139883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:36.139965 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:36.168529 2088124 cri.go:89] found id: ""
	I1216 04:16:36.168553 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.168561 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:36.168567 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:36.168627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:36.196976 2088124 cri.go:89] found id: ""
	I1216 04:16:36.197002 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.197027 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:36.197050 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:36.197133 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:36.221877 2088124 cri.go:89] found id: ""
	I1216 04:16:36.221903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.221912 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:36.221918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:36.222032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:36.248921 2088124 cri.go:89] found id: ""
	I1216 04:16:36.248947 2088124 logs.go:282] 0 containers: []
	W1216 04:16:36.248956 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:36.248966 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:36.248977 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:36.264593 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:36.264622 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:36.329217 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:36.319688   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.320663   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.322347   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.323118   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:36.324854   10199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:36.329239 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:36.329252 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:36.354482 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:36.354514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:36.382824 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:36.382890 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:38.944004 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:38.957491 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:38.957613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:38.982761 2088124 cri.go:89] found id: ""
	I1216 04:16:38.982787 2088124 logs.go:282] 0 containers: []
	W1216 04:16:38.982796 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:38.982803 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:38.982861 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:39.010506 2088124 cri.go:89] found id: ""
	I1216 04:16:39.010532 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.010542 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:39.010549 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:39.010630 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:39.035827 2088124 cri.go:89] found id: ""
	I1216 04:16:39.035853 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.035862 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:39.035875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:39.035934 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:39.060421 2088124 cri.go:89] found id: ""
	I1216 04:16:39.060448 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.060457 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:39.060463 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:39.060550 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:39.087481 2088124 cri.go:89] found id: ""
	I1216 04:16:39.087504 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.087512 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:39.087518 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:39.087577 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:39.111994 2088124 cri.go:89] found id: ""
	I1216 04:16:39.112028 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.112037 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:39.112044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:39.112114 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:39.136060 2088124 cri.go:89] found id: ""
	I1216 04:16:39.136093 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.136101 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:39.136108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:39.136186 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:39.166063 2088124 cri.go:89] found id: ""
	I1216 04:16:39.166090 2088124 logs.go:282] 0 containers: []
	W1216 04:16:39.166099 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:39.166109 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:39.166120 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:39.222912 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:39.222949 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:39.239064 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:39.239096 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:39.305289 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:39.297129   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.297723   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299366   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.299828   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:39.301323   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:39.305312 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:39.305326 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:39.330965 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:39.330997 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:41.862236 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:41.873016 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:41.873089 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:41.900650 2088124 cri.go:89] found id: ""
	I1216 04:16:41.900675 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.900684 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:41.900691 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:41.900754 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:41.924986 2088124 cri.go:89] found id: ""
	I1216 04:16:41.925012 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.925022 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:41.925028 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:41.925090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:41.950157 2088124 cri.go:89] found id: ""
	I1216 04:16:41.950182 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.950191 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:41.950197 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:41.950257 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:41.975738 2088124 cri.go:89] found id: ""
	I1216 04:16:41.975763 2088124 logs.go:282] 0 containers: []
	W1216 04:16:41.975772 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:41.975778 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:41.975837 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:42.008172 2088124 cri.go:89] found id: ""
	I1216 04:16:42.008203 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.008214 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:42.008221 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:42.008295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:42.036816 2088124 cri.go:89] found id: ""
	I1216 04:16:42.036841 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.036851 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:42.036858 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:42.036969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:42.066668 2088124 cri.go:89] found id: ""
	I1216 04:16:42.066697 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.066706 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:42.066713 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:42.066787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:42.098167 2088124 cri.go:89] found id: ""
	I1216 04:16:42.098200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:42.098217 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:42.098231 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:42.098245 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:42.184589 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:42.173723   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.175193   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.176598   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.177621   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:42.179728   10420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:42.184617 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:42.184635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:42.214306 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:42.214348 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:42.253172 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:42.253203 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:42.312705 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:42.312757 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
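
The `describe nodes` attempts use the node-local kubectl binary and kubeconfig rather than the host's. To confirm which endpoint that kubeconfig targets, a sketch reusing the exact paths from the log (run inside the node; `--minify` limits output to the current context):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig --minify
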
	I1216 04:16:44.831426 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:44.842214 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:44.842287 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:44.872804 2088124 cri.go:89] found id: ""
	I1216 04:16:44.872833 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.872843 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:44.872851 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:44.872915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:44.903988 2088124 cri.go:89] found id: ""
	I1216 04:16:44.904064 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.904089 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:44.904108 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:44.904200 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:44.930758 2088124 cri.go:89] found id: ""
	I1216 04:16:44.930837 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.930861 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:44.930880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:44.930971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:44.955785 2088124 cri.go:89] found id: ""
	I1216 04:16:44.955809 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.955817 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:44.955823 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:44.955883 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:44.983685 2088124 cri.go:89] found id: ""
	I1216 04:16:44.983762 2088124 logs.go:282] 0 containers: []
	W1216 04:16:44.983785 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:44.983800 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:44.983876 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:45.034599 2088124 cri.go:89] found id: ""
	I1216 04:16:45.034623 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.034631 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:45.034639 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:45.034713 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:45.106900 2088124 cri.go:89] found id: ""
	I1216 04:16:45.106927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.106937 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:45.106945 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:45.107019 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:45.148790 2088124 cri.go:89] found id: ""
	I1216 04:16:45.148816 2088124 logs.go:282] 0 containers: []
	W1216 04:16:45.148826 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:45.148837 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:45.148851 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:45.242114 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:45.242166 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:45.275372 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:45.275416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:45.355175 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:45.346532   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.347206   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.348772   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.349213   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:45.350684   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:45.355241 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:45.355263 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:45.382211 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:45.382248 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
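Each cycle in this log follows the same shape on a roughly three-second interval: minikube first pgreps for a running kube-apiserver process, then asks crictl for each control-plane container it expects (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and every query returns an empty ID list. A minimal sketch of the same probe run by hand, assuming a hypothetical profile name "functional-test" with SSH access to the node:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # Same crictl query minikube issues above; an empty result means the
      # container was never created.
      ids=$(minikube -p functional-test ssh -- sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c: $ids"
    done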
	I1216 04:16:47.915609 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:47.927521 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:47.927603 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:47.957165 2088124 cri.go:89] found id: ""
	I1216 04:16:47.957192 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.957205 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:47.957212 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:47.957278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:47.983356 2088124 cri.go:89] found id: ""
	I1216 04:16:47.983379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:47.983396 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:47.983408 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:47.983475 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:48.012782 2088124 cri.go:89] found id: ""
	I1216 04:16:48.012807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.012815 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:48.012822 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:48.012887 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:48.042072 2088124 cri.go:89] found id: ""
	I1216 04:16:48.042096 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.042105 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:48.042111 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:48.042172 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:48.066925 2088124 cri.go:89] found id: ""
	I1216 04:16:48.066954 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.066963 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:48.066970 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:48.067032 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:48.097340 2088124 cri.go:89] found id: ""
	I1216 04:16:48.097366 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.097378 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:48.097385 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:48.097470 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:48.126364 2088124 cri.go:89] found id: ""
	I1216 04:16:48.126397 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.126407 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:48.126413 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:48.126510 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:48.152175 2088124 cri.go:89] found id: ""
	I1216 04:16:48.152199 2088124 logs.go:282] 0 containers: []
	W1216 04:16:48.152207 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:48.152217 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:48.152232 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:48.216814 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:48.216861 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:48.235153 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:48.235187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:48.303336 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:48.295476   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.296105   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.297650   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.298118   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:48.299616   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:48.303404 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:48.303433 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:48.332107 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:48.332175 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
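Every "describe nodes" attempt fails the same way: the bundled kubectl under /var/lib/minikube/binaries/v1.35.0-beta.0 gets connection refused on [::1]:8443, which is consistent with the empty crictl listings, since there is no apiserver container to answer. A quick manual check of the same failure mode, again assuming the hypothetical "functional-test" profile and that curl is available inside the node image:

    # Probe the apiserver port the kubeconfig points at; a refused connection
    # here reproduces the memcache.go errors repeated throughout this log.
    minikube -p functional-test ssh -- curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo "apiserver unreachable on :8443"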
	I1216 04:16:50.863912 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:50.876115 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:50.876205 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:50.902170 2088124 cri.go:89] found id: ""
	I1216 04:16:50.902200 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.902209 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:50.902216 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:50.902273 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:50.925870 2088124 cri.go:89] found id: ""
	I1216 04:16:50.925903 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.925912 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:50.925918 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:50.925986 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:50.950257 2088124 cri.go:89] found id: ""
	I1216 04:16:50.950283 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.950293 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:50.950299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:50.950358 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:50.975507 2088124 cri.go:89] found id: ""
	I1216 04:16:50.975531 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.975541 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:50.975547 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:50.975607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:50.999494 2088124 cri.go:89] found id: ""
	I1216 04:16:50.999520 2088124 logs.go:282] 0 containers: []
	W1216 04:16:50.999529 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:50.999535 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:50.999599 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:51.026658 2088124 cri.go:89] found id: ""
	I1216 04:16:51.026685 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.026694 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:51.026701 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:51.026760 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:51.051749 2088124 cri.go:89] found id: ""
	I1216 04:16:51.051775 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.051784 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:51.051790 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:51.051868 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:51.076898 2088124 cri.go:89] found id: ""
	I1216 04:16:51.076927 2088124 logs.go:282] 0 containers: []
	W1216 04:16:51.076938 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:51.076948 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:51.076960 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:51.103255 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:51.103293 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:51.134833 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:51.134859 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:51.193704 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:51.193741 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:51.212900 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:51.212928 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:51.297351 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:51.288083   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.288656   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.289678   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291273   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291873   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:51.288083   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.288656   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.289678   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291273   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:51.291873   10776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:53.797612 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:53.808331 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:53.808407 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:53.832739 2088124 cri.go:89] found id: ""
	I1216 04:16:53.832807 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.832829 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:53.832850 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:53.832945 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:53.857832 2088124 cri.go:89] found id: ""
	I1216 04:16:53.857869 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.857878 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:53.857885 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:53.857954 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:53.885064 2088124 cri.go:89] found id: ""
	I1216 04:16:53.885087 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.885095 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:53.885101 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:53.885158 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:53.913372 2088124 cri.go:89] found id: ""
	I1216 04:16:53.913451 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.913475 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:53.913493 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:53.913586 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:53.940577 2088124 cri.go:89] found id: ""
	I1216 04:16:53.940646 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.940673 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:53.940687 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:53.940764 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:53.966496 2088124 cri.go:89] found id: ""
	I1216 04:16:53.966534 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.966543 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:53.966552 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:53.966623 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:53.992796 2088124 cri.go:89] found id: ""
	I1216 04:16:53.992820 2088124 logs.go:282] 0 containers: []
	W1216 04:16:53.992828 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:53.992834 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:53.992896 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:54.019752 2088124 cri.go:89] found id: ""
	I1216 04:16:54.019840 2088124 logs.go:282] 0 containers: []
	W1216 04:16:54.019857 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:54.019868 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:54.019880 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:54.079349 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:54.079394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:54.098509 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:54.098593 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:54.166447 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:54.157535   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.158625   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160232   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160884   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.162506   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:54.157535   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.158625   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160232   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.160884   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:54.162506   10877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:54.166510 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:54.166549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:54.191683 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:54.191718 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:16:56.719163 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:56.748538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:56.748613 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:56.786218 2088124 cri.go:89] found id: ""
	I1216 04:16:56.786244 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.786253 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:56.786259 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:56.786320 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:56.812994 2088124 cri.go:89] found id: ""
	I1216 04:16:56.813016 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.813024 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:56.813031 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:56.813090 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:56.841729 2088124 cri.go:89] found id: ""
	I1216 04:16:56.841751 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.841760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:56.841766 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:56.841825 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:56.870356 2088124 cri.go:89] found id: ""
	I1216 04:16:56.870379 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.870387 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:56.870393 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:56.870451 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:56.899841 2088124 cri.go:89] found id: ""
	I1216 04:16:56.899867 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.899877 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:56.899883 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:56.899943 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:56.924316 2088124 cri.go:89] found id: ""
	I1216 04:16:56.924343 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.924352 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:56.924359 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:56.924417 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:56.948789 2088124 cri.go:89] found id: ""
	I1216 04:16:56.948815 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.948824 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:56.948830 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:56.948891 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:56.977394 2088124 cri.go:89] found id: ""
	I1216 04:16:56.977423 2088124 logs.go:282] 0 containers: []
	W1216 04:16:56.977432 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:56.977441 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:56.977453 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:16:57.032732 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:16:57.032770 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:16:57.048273 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:16:57.048302 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:16:57.115644 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:16:57.106949   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.107590   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109199   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109751   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.111454   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:16:57.106949   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.107590   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109199   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.109751   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:16:57.111454   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:16:57.115665 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:16:57.115685 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:16:57.140936 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:16:57.140971 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
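The "container status" command just above gathers its output through a two-level fallback, unpacked here for readability:

    # `which crictl || echo crictl` prefers an absolute crictl path and falls
    # back to the bare name if it is not on root's PATH; if crictl itself
    # fails, the docker CLI is tried as a last resort.
    crictl_bin=$(which crictl || echo crictl)
    sudo "$crictl_bin" ps -a || sudo docker ps -a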
	I1216 04:16:59.669285 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:16:59.682343 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:16:59.682415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:16:59.722722 2088124 cri.go:89] found id: ""
	I1216 04:16:59.722750 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.722758 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:16:59.722764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:16:59.722824 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:16:59.778634 2088124 cri.go:89] found id: ""
	I1216 04:16:59.778659 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.778667 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:16:59.778674 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:16:59.778733 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:16:59.817378 2088124 cri.go:89] found id: ""
	I1216 04:16:59.817470 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.817498 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:16:59.817538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:16:59.817644 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:16:59.848330 2088124 cri.go:89] found id: ""
	I1216 04:16:59.848356 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.848365 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:16:59.848372 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:16:59.848459 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:16:59.880033 2088124 cri.go:89] found id: ""
	I1216 04:16:59.880061 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.880074 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:16:59.880080 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:16:59.880154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:16:59.909206 2088124 cri.go:89] found id: ""
	I1216 04:16:59.909231 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.909241 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:16:59.909248 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:16:59.909351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:16:59.934604 2088124 cri.go:89] found id: ""
	I1216 04:16:59.934630 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.934639 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:16:59.934646 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:16:59.934708 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:16:59.959916 2088124 cri.go:89] found id: ""
	I1216 04:16:59.959994 2088124 logs.go:282] 0 containers: []
	W1216 04:16:59.960011 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:16:59.960022 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:16:59.960035 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:00.015911 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:00.016018 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:00.105766 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:00.105818 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:00.319730 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:00.290488   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.291031   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.293087   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.295468   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.296159   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:00.290488   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.291031   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.293087   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.295468   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:00.296159   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:00.319780 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:00.319793 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:00.371509 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:00.371569 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:02.957388 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:02.969075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:02.969174 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:02.996244 2088124 cri.go:89] found id: ""
	I1216 04:17:02.996268 2088124 logs.go:282] 0 containers: []
	W1216 04:17:02.996276 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:02.996283 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:02.996351 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:03.035674 2088124 cri.go:89] found id: ""
	I1216 04:17:03.035699 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.035709 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:03.035716 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:03.035786 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:03.063231 2088124 cri.go:89] found id: ""
	I1216 04:17:03.063262 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.063271 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:03.063278 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:03.063348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:03.090248 2088124 cri.go:89] found id: ""
	I1216 04:17:03.090277 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.090285 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:03.090292 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:03.090357 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:03.118599 2088124 cri.go:89] found id: ""
	I1216 04:17:03.118628 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.118637 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:03.118643 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:03.118705 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:03.145364 2088124 cri.go:89] found id: ""
	I1216 04:17:03.145394 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.145403 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:03.145411 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:03.145476 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:03.174022 2088124 cri.go:89] found id: ""
	I1216 04:17:03.174047 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.174057 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:03.174064 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:03.174132 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:03.201495 2088124 cri.go:89] found id: ""
	I1216 04:17:03.201518 2088124 logs.go:282] 0 containers: []
	W1216 04:17:03.201527 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:03.201537 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:03.201549 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:03.259166 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:03.259202 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:03.276281 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:03.276319 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:03.347465 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:03.338991   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.339696   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341341   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341888   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.343162   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:03.338991   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.339696   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341341   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.341888   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:03.343162   11219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:03.347486 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:03.347499 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:03.374421 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:03.374460 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:05.905789 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:05.917930 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:05.918028 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:05.944068 2088124 cri.go:89] found id: ""
	I1216 04:17:05.944092 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.944100 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:05.944106 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:05.944170 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:05.971887 2088124 cri.go:89] found id: ""
	I1216 04:17:05.971915 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.971924 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:05.971931 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:05.971998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:05.999415 2088124 cri.go:89] found id: ""
	I1216 04:17:05.999452 2088124 logs.go:282] 0 containers: []
	W1216 04:17:05.999467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:05.999474 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:05.999547 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:06.038021 2088124 cri.go:89] found id: ""
	I1216 04:17:06.038109 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.038128 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:06.038138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:06.038231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:06.069582 2088124 cri.go:89] found id: ""
	I1216 04:17:06.069610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.069620 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:06.069626 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:06.069702 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:06.102728 2088124 cri.go:89] found id: ""
	I1216 04:17:06.102753 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.102763 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:06.102770 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:06.102846 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:06.131178 2088124 cri.go:89] found id: ""
	I1216 04:17:06.131372 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.131401 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:06.131420 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:06.131527 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:06.158881 2088124 cri.go:89] found id: ""
	I1216 04:17:06.158966 2088124 logs.go:282] 0 containers: []
	W1216 04:17:06.158996 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:06.159061 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:06.159098 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:06.185524 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:06.185554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:06.221206 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:06.221235 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:06.280309 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:06.280357 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:06.297032 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:06.297065 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:06.363186 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:06.354862   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.355590   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357189   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357554   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.359080   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:06.354862   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.355590   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357189   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.357554   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:06.359080   11347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
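In each cycle the log collection itself succeeds; only the kubectl step fails. The three collection commands, runnable by hand inside the node, are taken verbatim from the log:

    # Last 400 lines of each relevant systemd unit.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # Kernel messages at warning severity or worse; -H keeps human-readable
    # timestamps, -P skips the pager, -L=never disables color.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400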
	I1216 04:17:08.864854 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:08.875530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:08.875607 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:08.900341 2088124 cri.go:89] found id: ""
	I1216 04:17:08.900376 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.900386 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:08.900392 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:08.900453 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:08.924614 2088124 cri.go:89] found id: ""
	I1216 04:17:08.924638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.924647 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:08.924653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:08.924715 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:08.949702 2088124 cri.go:89] found id: ""
	I1216 04:17:08.949729 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.949738 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:08.949744 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:08.949803 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:08.973818 2088124 cri.go:89] found id: ""
	I1216 04:17:08.973848 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.973858 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:08.973864 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:08.973923 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:08.999010 2088124 cri.go:89] found id: ""
	I1216 04:17:08.999033 2088124 logs.go:282] 0 containers: []
	W1216 04:17:08.999079 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:08.999087 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:08.999149 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:09.030095 2088124 cri.go:89] found id: ""
	I1216 04:17:09.030122 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.030131 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:09.030138 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:09.030198 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:09.054300 2088124 cri.go:89] found id: ""
	I1216 04:17:09.054324 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.054332 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:09.054339 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:09.054397 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:09.078301 2088124 cri.go:89] found id: ""
	I1216 04:17:09.078328 2088124 logs.go:282] 0 containers: []
	W1216 04:17:09.078337 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:09.078346 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:09.078358 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:09.106185 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:09.106220 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:09.161474 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:09.161513 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:09.177365 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:09.177394 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:09.242353 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:09.233841   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.234299   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236131   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236653   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.238465   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:09.233841   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.234299   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236131   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.236653   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:09.238465   11460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:09.242378 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:09.242392 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
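
The cycle above (one crictl query per expected control-plane container, each returning an empty ID list) repeats below with only timestamps and PIDs changing. It can be reproduced by hand with the same crictl invocations the log records; a minimal sketch, where the loop structure is ours but the flags are copied verbatim from the Run: lines:

    # Check each expected control-plane container the way these log lines do.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # Empty output corresponds to the 'found id: ""' lines above.
      [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
    done
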
	I1216 04:17:11.767582 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:11.779587 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:11.779667 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:11.806280 2088124 cri.go:89] found id: ""
	I1216 04:17:11.806308 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.806317 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:11.806323 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:11.806386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:11.831161 2088124 cri.go:89] found id: ""
	I1216 04:17:11.831187 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.831196 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:11.831203 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:11.831262 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:11.859758 2088124 cri.go:89] found id: ""
	I1216 04:17:11.859781 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.859790 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:11.859796 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:11.859853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:11.884445 2088124 cri.go:89] found id: ""
	I1216 04:17:11.884473 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.884483 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:11.884489 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:11.884567 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:11.909783 2088124 cri.go:89] found id: ""
	I1216 04:17:11.909860 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.909886 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:11.909904 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:11.909989 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:11.934802 2088124 cri.go:89] found id: ""
	I1216 04:17:11.934833 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.934842 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:11.934848 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:11.934909 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:11.961240 2088124 cri.go:89] found id: ""
	I1216 04:17:11.961318 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.961344 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:11.961358 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:11.961431 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:11.985352 2088124 cri.go:89] found id: ""
	I1216 04:17:11.985380 2088124 logs.go:282] 0 containers: []
	W1216 04:17:11.985389 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:11.985404 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:11.985416 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:12.050891 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:12.042955   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.043613   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045154   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045461   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.046975   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:12.042955   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.043613   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045154   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.045461   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:12.046975   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:12.050912 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:12.050925 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:12.076153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:12.076186 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:12.108364 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:12.108393 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:12.164122 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:12.164161 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
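
The repeated "connection refused" stderr means nothing is answering on the apiserver port inside the node. A quick manual check, assuming shell access to the node; the pgrep pattern is copied from the log, while the curl probe and the /livez endpoint are our addition:

    # Is a kube-apiserver process running at all? (pattern taken from the log)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "kube-apiserver process not running"
    # Does anything answer on the apiserver port? (curl probe is an assumption)
    curl -sk https://localhost:8443/livez || echo "no listener on :8443"
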
	I1216 04:17:14.681316 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:14.698056 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:14.698131 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:14.764358 2088124 cri.go:89] found id: ""
	I1216 04:17:14.764382 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.764391 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:14.764397 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:14.764468 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:14.792079 2088124 cri.go:89] found id: ""
	I1216 04:17:14.792110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.792120 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:14.792130 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:14.792197 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:14.817831 2088124 cri.go:89] found id: ""
	I1216 04:17:14.817857 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.817867 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:14.817875 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:14.817935 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:14.846609 2088124 cri.go:89] found id: ""
	I1216 04:17:14.846638 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.846646 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:14.846653 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:14.846712 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:14.871213 2088124 cri.go:89] found id: ""
	I1216 04:17:14.871237 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.871246 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:14.871255 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:14.871313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:14.896165 2088124 cri.go:89] found id: ""
	I1216 04:17:14.896192 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.896201 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:14.896208 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:14.896269 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:14.922595 2088124 cri.go:89] found id: ""
	I1216 04:17:14.922621 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.922629 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:14.922635 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:14.922698 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:14.949236 2088124 cri.go:89] found id: ""
	I1216 04:17:14.949303 2088124 logs.go:282] 0 containers: []
	W1216 04:17:14.949327 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:14.949344 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:14.949356 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:15.027151 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:15.009633   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.011301   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.012737   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.013346   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.016022   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:15.009633   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.011301   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.012737   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.013346   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:15.016022   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:15.027238 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:15.027269 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:15.060605 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:15.060646 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:15.093643 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:15.093728 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:15.150597 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:15.150635 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:17.668643 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:17.679947 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:17.680020 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:17.722386 2088124 cri.go:89] found id: ""
	I1216 04:17:17.722409 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.722417 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:17.722423 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:17.722487 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:17.775941 2088124 cri.go:89] found id: ""
	I1216 04:17:17.775964 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.775974 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:17.775980 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:17.776040 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:17.802436 2088124 cri.go:89] found id: ""
	I1216 04:17:17.802458 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.802467 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:17.802473 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:17.802532 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:17.828371 2088124 cri.go:89] found id: ""
	I1216 04:17:17.828399 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.828409 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:17.828415 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:17.828479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:17.853344 2088124 cri.go:89] found id: ""
	I1216 04:17:17.853370 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.853379 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:17.853386 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:17.853479 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:17.881429 2088124 cri.go:89] found id: ""
	I1216 04:17:17.881456 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.881465 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:17.881471 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:17.881533 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:17.904862 2088124 cri.go:89] found id: ""
	I1216 04:17:17.904938 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.904961 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:17.904975 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:17.905050 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:17.929897 2088124 cri.go:89] found id: ""
	I1216 04:17:17.929977 2088124 logs.go:282] 0 containers: []
	W1216 04:17:17.930001 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:17.930028 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:17.930064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:17.998744 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:17.990296   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.990760   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.992790   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.993233   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.994439   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:17.990296   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.990760   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.992790   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.993233   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:17.994439   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:17.998813 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:17.998840 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:18.026132 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:18.026171 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:18.058645 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:18.058676 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:18.115432 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:18.115467 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
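
Each cycle also gathers the same four log sources; the exact commands appear in the Run: lines above and can be executed directly on the node to look for the reason the control plane never came up:

    sudo journalctl -u kubelet -n 400      # kubelet log, the usual place the root cause shows up
    sudo journalctl -u containerd -n 400   # container runtime log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status
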
	I1216 04:17:20.631899 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:20.643452 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:20.643535 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:20.668165 2088124 cri.go:89] found id: ""
	I1216 04:17:20.668190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.668199 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:20.668205 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:20.668263 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:20.724732 2088124 cri.go:89] found id: ""
	I1216 04:17:20.724759 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.724768 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:20.724774 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:20.724845 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:20.771015 2088124 cri.go:89] found id: ""
	I1216 04:17:20.771058 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.771068 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:20.771075 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:20.771155 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:20.805632 2088124 cri.go:89] found id: ""
	I1216 04:17:20.805662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.805672 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:20.805679 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:20.805747 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:20.835160 2088124 cri.go:89] found id: ""
	I1216 04:17:20.835226 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.835242 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:20.835249 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:20.835308 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:20.861499 2088124 cri.go:89] found id: ""
	I1216 04:17:20.861522 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.861531 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:20.861538 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:20.861595 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:20.885895 2088124 cri.go:89] found id: ""
	I1216 04:17:20.885919 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.885928 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:20.885934 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:20.885998 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:20.910445 2088124 cri.go:89] found id: ""
	I1216 04:17:20.910468 2088124 logs.go:282] 0 containers: []
	W1216 04:17:20.910477 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:20.910486 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:20.910498 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:20.966176 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:20.966211 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:20.983062 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:20.983092 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:21.049819 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:21.041149   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.041819   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.043484   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.044157   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.045775   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:21.041149   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.041819   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.043484   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.044157   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:21.045775   11905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:21.049842 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:21.049856 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:21.075330 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:21.075370 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:23.603121 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:23.613760 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:23.613834 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:23.642856 2088124 cri.go:89] found id: ""
	I1216 04:17:23.642882 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.642890 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:23.642897 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:23.642957 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:23.671150 2088124 cri.go:89] found id: ""
	I1216 04:17:23.671175 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.671183 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:23.671189 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:23.671247 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:23.733230 2088124 cri.go:89] found id: ""
	I1216 04:17:23.733256 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.733265 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:23.733271 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:23.733330 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:23.782653 2088124 cri.go:89] found id: ""
	I1216 04:17:23.782679 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.782688 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:23.782694 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:23.782759 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:23.810224 2088124 cri.go:89] found id: ""
	I1216 04:17:23.810249 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.810259 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:23.810266 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:23.810327 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:23.835579 2088124 cri.go:89] found id: ""
	I1216 04:17:23.835604 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.835613 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:23.835620 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:23.835680 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:23.864585 2088124 cri.go:89] found id: ""
	I1216 04:17:23.864610 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.864618 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:23.864625 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:23.864683 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:23.892217 2088124 cri.go:89] found id: ""
	I1216 04:17:23.892294 2088124 logs.go:282] 0 containers: []
	W1216 04:17:23.892311 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:23.892322 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:23.892334 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:23.955889 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:23.947392   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.947993   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949516   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949846   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.951412   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:23.947392   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.947993   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949516   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.949846   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:23.951412   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:23.955910 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:23.955929 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:23.983017 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:23.983064 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:24.018919 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:24.018946 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:24.076537 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:24.076578 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
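
The "describe nodes" step fails for the same reason on every iteration: kubectl cannot reach an apiserver on localhost:8443. The exact probe, copied from the failing command in the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # With no apiserver listening, this exits 1 with "connection refused",
    # exactly as captured in the ** stderr ** blocks above.
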
	I1216 04:17:26.592968 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:26.603896 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:26.603971 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:26.628560 2088124 cri.go:89] found id: ""
	I1216 04:17:26.628583 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.628591 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:26.628597 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:26.628663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:26.655525 2088124 cri.go:89] found id: ""
	I1216 04:17:26.655549 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.655558 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:26.655564 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:26.655627 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:26.681142 2088124 cri.go:89] found id: ""
	I1216 04:17:26.681169 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.681178 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:26.681185 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:26.681245 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:26.726046 2088124 cri.go:89] found id: ""
	I1216 04:17:26.726069 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.726078 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:26.726084 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:26.726145 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:26.761483 2088124 cri.go:89] found id: ""
	I1216 04:17:26.761558 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.761570 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:26.761578 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:26.761670 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:26.804988 2088124 cri.go:89] found id: ""
	I1216 04:17:26.805062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.805085 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:26.805104 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:26.805191 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:26.835017 2088124 cri.go:89] found id: ""
	I1216 04:17:26.835107 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.835132 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:26.835146 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:26.835222 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:26.864963 2088124 cri.go:89] found id: ""
	I1216 04:17:26.864989 2088124 logs.go:282] 0 containers: []
	W1216 04:17:26.864998 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:26.865008 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:26.865020 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:26.920931 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:26.920966 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:26.936801 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:26.936828 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:27.001379 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:26.993556   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.993942   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.995579   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.996105   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.997606   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:26.993556   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.993942   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.995579   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.996105   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:26.997606   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:27.001453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:27.001473 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:27.029301 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:27.029338 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.560341 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:29.570732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:29.570810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:29.594792 2088124 cri.go:89] found id: ""
	I1216 04:17:29.594819 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.594828 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:29.594835 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:29.594900 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:29.619488 2088124 cri.go:89] found id: ""
	I1216 04:17:29.619514 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.619523 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:29.619530 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:29.619589 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:29.644688 2088124 cri.go:89] found id: ""
	I1216 04:17:29.644711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.644720 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:29.644726 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:29.644792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:29.670117 2088124 cri.go:89] found id: ""
	I1216 04:17:29.670143 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.670152 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:29.670158 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:29.670246 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:29.744231 2088124 cri.go:89] found id: ""
	I1216 04:17:29.744258 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.744267 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:29.744273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:29.744333 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:29.784178 2088124 cri.go:89] found id: ""
	I1216 04:17:29.784201 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.784211 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:29.784217 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:29.784278 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:29.813318 2088124 cri.go:89] found id: ""
	I1216 04:17:29.813341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.813349 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:29.813355 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:29.813414 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:29.841947 2088124 cri.go:89] found id: ""
	I1216 04:17:29.841973 2088124 logs.go:282] 0 containers: []
	W1216 04:17:29.841981 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:29.841991 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:29.842003 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:29.872423 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:29.872449 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:29.927890 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:29.927927 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:29.943872 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:29.943903 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:30.030211 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:30.002270   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.003334   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.011701   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.013525   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.017907   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:30.002270   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.003334   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.011701   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.013525   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:30.017907   12251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:30.030233 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:30.030247 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
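
The timestamps show the whole cycle retrying on a roughly three-second cadence (04:17:08, :11, :14, ... :32) without ever progressing, because no kube-apiserver process appears. A hedged sketch of the wait implied by the pgrep lines; the loop and the bound are ours, the pgrep pattern is from the log:

    # Poll (here: up to 160 tries at ~3 s apart) for an apiserver process.
    for i in $(seq 1 160); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
      sleep 3
    done
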
	I1216 04:17:32.571327 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:32.582193 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:32.582264 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:32.614548 2088124 cri.go:89] found id: ""
	I1216 04:17:32.614575 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.614584 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:32.614591 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:32.614656 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:32.639581 2088124 cri.go:89] found id: ""
	I1216 04:17:32.639609 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.639618 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:32.639624 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:32.639690 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:32.664409 2088124 cri.go:89] found id: ""
	I1216 04:17:32.664431 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.664440 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:32.664446 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:32.664540 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:32.702042 2088124 cri.go:89] found id: ""
	I1216 04:17:32.702068 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.702077 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:32.702083 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:32.702143 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:32.744945 2088124 cri.go:89] found id: ""
	I1216 04:17:32.744972 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.744981 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:32.744988 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:32.745073 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:32.789635 2088124 cri.go:89] found id: ""
	I1216 04:17:32.789662 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.789671 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:32.789678 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:32.789739 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:32.815679 2088124 cri.go:89] found id: ""
	I1216 04:17:32.815707 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.815717 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:32.815724 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:32.815787 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:32.841170 2088124 cri.go:89] found id: ""
	I1216 04:17:32.841195 2088124 logs.go:282] 0 containers: []
	W1216 04:17:32.841204 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:32.841213 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:32.841224 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:32.897709 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:32.897747 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:32.913830 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:32.913862 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:32.978618 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:32.969623   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.970476   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972369   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.972985   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:32.974627   12351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:32.978642 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:32.978655 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:33.004220 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:33.004272 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:35.534506 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:35.545218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:35.545290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:35.570921 2088124 cri.go:89] found id: ""
	I1216 04:17:35.570949 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.570958 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:35.570965 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:35.571023 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:35.596188 2088124 cri.go:89] found id: ""
	I1216 04:17:35.596216 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.596226 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:35.596232 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:35.596290 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:35.621275 2088124 cri.go:89] found id: ""
	I1216 04:17:35.621298 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.621307 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:35.621313 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:35.621373 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:35.646280 2088124 cri.go:89] found id: ""
	I1216 04:17:35.646304 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.646312 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:35.646319 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:35.646380 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:35.674777 2088124 cri.go:89] found id: ""
	I1216 04:17:35.674850 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.674874 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:35.674894 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:35.674969 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:35.734693 2088124 cri.go:89] found id: ""
	I1216 04:17:35.734716 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.734725 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:35.734732 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:35.734792 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:35.776099 2088124 cri.go:89] found id: ""
	I1216 04:17:35.776121 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.776129 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:35.776136 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:35.776195 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:35.809643 2088124 cri.go:89] found id: ""
	I1216 04:17:35.809720 2088124 logs.go:282] 0 containers: []
	W1216 04:17:35.809744 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:35.809765 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:35.809805 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:35.865415 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:35.865452 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:35.880891 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:35.880969 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:35.943467 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:35.935628   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.936425   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938104   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.938398   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:35.939836   12461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:35.943485 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:35.943497 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:35.968153 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:35.968187 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:38.502135 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:38.512843 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:38.512915 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:38.537512 2088124 cri.go:89] found id: ""
	I1216 04:17:38.537537 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.537546 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:38.537553 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:38.537618 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:38.563124 2088124 cri.go:89] found id: ""
	I1216 04:17:38.563159 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.563168 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:38.563174 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:38.563265 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:38.589894 2088124 cri.go:89] found id: ""
	I1216 04:17:38.589918 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.589927 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:38.589933 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:38.590001 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:38.615078 2088124 cri.go:89] found id: ""
	I1216 04:17:38.615104 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.615114 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:38.615120 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:38.615188 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:38.640365 2088124 cri.go:89] found id: ""
	I1216 04:17:38.640397 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.640406 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:38.640416 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:38.640486 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:38.664018 2088124 cri.go:89] found id: ""
	I1216 04:17:38.664095 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.664116 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:38.664125 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:38.664194 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:38.704314 2088124 cri.go:89] found id: ""
	I1216 04:17:38.704341 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.704350 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:38.704356 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:38.704415 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:38.747321 2088124 cri.go:89] found id: ""
	I1216 04:17:38.747349 2088124 logs.go:282] 0 containers: []
	W1216 04:17:38.747357 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:38.747366 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:38.747377 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:38.778906 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:38.778937 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:38.846005 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:38.837440   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.837970   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.839654   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.840202   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:38.841808   12571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:38.846026 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:38.846039 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:38.872344 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:38.872381 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:38.907009 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:38.907060 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
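	Inside the node, the kubectl used for the describe-nodes step points at https://localhost:8443 via /var/lib/minikube/kubeconfig, so connection refused on [::1]:8443 simply means nothing is listening on that port — consistent with crictl reporting zero kube-apiserver containers. A sketch of two quick follow-ups on the node, assuming the kubeadm-style static-pod layout minikube's bootstrapper normally uses:
	
		# confirm nothing is bound to the apiserver port
		sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'
		# the kubelet runs the control plane from static-pod manifests;
		# kube-apiserver.yaml should normally be present here
		ls -l /etc/kubernetes/manifests/
	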
	I1216 04:17:41.467452 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:41.478044 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:41.478160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:41.505036 2088124 cri.go:89] found id: ""
	I1216 04:17:41.505062 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.505072 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:41.505079 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:41.505163 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:41.533010 2088124 cri.go:89] found id: ""
	I1216 04:17:41.533044 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.533054 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:41.533078 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:41.533160 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:41.557094 2088124 cri.go:89] found id: ""
	I1216 04:17:41.557166 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.557181 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:41.557188 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:41.557261 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:41.585674 2088124 cri.go:89] found id: ""
	I1216 04:17:41.585718 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.585727 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:41.585734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:41.585805 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:41.610276 2088124 cri.go:89] found id: ""
	I1216 04:17:41.610311 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.610320 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:41.610327 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:41.610398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:41.636914 2088124 cri.go:89] found id: ""
	I1216 04:17:41.636981 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.637010 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:41.637025 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:41.637097 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:41.665097 2088124 cri.go:89] found id: ""
	I1216 04:17:41.665161 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.665187 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:41.665202 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:41.665279 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:41.727525 2088124 cri.go:89] found id: ""
	I1216 04:17:41.727553 2088124 logs.go:282] 0 containers: []
	W1216 04:17:41.727562 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:41.727571 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:41.727589 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:41.817873 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:41.817913 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:41.834790 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:41.834817 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:41.903430 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:41.895682   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.896090   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897605   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.897915   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:41.899505   12687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:41.903453 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:41.903465 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:41.928600 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:41.928640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:44.456049 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:44.466779 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:44.466853 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:44.493084 2088124 cri.go:89] found id: ""
	I1216 04:17:44.493110 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.493119 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:44.493126 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:44.493185 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:44.517683 2088124 cri.go:89] found id: ""
	I1216 04:17:44.517717 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.517727 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:44.517734 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:44.517810 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:44.541717 2088124 cri.go:89] found id: ""
	I1216 04:17:44.541749 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.541758 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:44.541764 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:44.541830 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:44.565684 2088124 cri.go:89] found id: ""
	I1216 04:17:44.565711 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.565723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:44.565729 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:44.565796 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:44.590246 2088124 cri.go:89] found id: ""
	I1216 04:17:44.590285 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.590293 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:44.590300 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:44.590372 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:44.618255 2088124 cri.go:89] found id: ""
	I1216 04:17:44.618284 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.618292 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:44.618299 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:44.618367 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:44.648191 2088124 cri.go:89] found id: ""
	I1216 04:17:44.648219 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.648228 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:44.648234 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:44.648295 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:44.679499 2088124 cri.go:89] found id: ""
	I1216 04:17:44.679574 2088124 logs.go:282] 0 containers: []
	W1216 04:17:44.679598 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:44.679615 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:44.679640 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:44.758228 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:44.758267 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:44.779294 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:44.779331 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:44.858723 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:44.849709   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.850588   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852326   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.852658   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:44.854144   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:44.858749 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:44.858764 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:44.883969 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:44.884008 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:47.413411 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:47.423987 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:47.424106 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:47.449248 2088124 cri.go:89] found id: ""
	I1216 04:17:47.449314 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.449329 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:47.449336 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:47.449398 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:47.475548 2088124 cri.go:89] found id: ""
	I1216 04:17:47.475578 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.475587 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:47.475593 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:47.475655 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:47.500110 2088124 cri.go:89] found id: ""
	I1216 04:17:47.500177 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.500199 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:47.500218 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:47.500306 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:47.530628 2088124 cri.go:89] found id: ""
	I1216 04:17:47.530696 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.530723 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:47.530741 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:47.530826 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:47.556437 2088124 cri.go:89] found id: ""
	I1216 04:17:47.556464 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.556473 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:47.556479 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:47.556549 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:47.581048 2088124 cri.go:89] found id: ""
	I1216 04:17:47.581071 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.581081 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:47.581088 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:47.581148 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:47.606560 2088124 cri.go:89] found id: ""
	I1216 04:17:47.606588 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.606596 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:47.606603 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:47.606663 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:47.640327 2088124 cri.go:89] found id: ""
	I1216 04:17:47.640352 2088124 logs.go:282] 0 containers: []
	W1216 04:17:47.640360 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:47.640370 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:47.640388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:47.702815 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:47.702920 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:47.736710 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:47.736751 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:47.839518 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:47.830501   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.831305   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833129   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.833682   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:47.835432   12916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:47.839540 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:47.839554 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:47.865722 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:47.865758 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
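	The "container status" step above is a bash one-liner with a double fallback: the backtick command substitution resolves crictl via which (falling back to the literal name crictl if which finds nothing on PATH), and if that invocation fails outright the || falls through to docker ps -a. The same idiom in the more common $(...) form, purely as an illustration:
	
		# prefer crictl if installed, else try the bare name, else docker
		sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
	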
	I1216 04:17:50.397056 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:50.409097 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:50.409241 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:50.437681 2088124 cri.go:89] found id: ""
	I1216 04:17:50.437704 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.437714 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:50.437743 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:50.437829 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:50.462756 2088124 cri.go:89] found id: ""
	I1216 04:17:50.462783 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.462791 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:50.462798 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:50.462914 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:50.487724 2088124 cri.go:89] found id: ""
	I1216 04:17:50.487751 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.487760 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:50.487767 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:50.487873 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:50.513141 2088124 cri.go:89] found id: ""
	I1216 04:17:50.513208 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.513219 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:50.513237 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:50.513315 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:50.538993 2088124 cri.go:89] found id: ""
	I1216 04:17:50.539100 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.539124 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:50.539144 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:50.539231 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:50.564296 2088124 cri.go:89] found id: ""
	I1216 04:17:50.564319 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.564328 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:50.564335 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:50.564395 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:50.587840 2088124 cri.go:89] found id: ""
	I1216 04:17:50.587865 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.587874 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:50.587880 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:50.587941 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:50.616481 2088124 cri.go:89] found id: ""
	I1216 04:17:50.616555 2088124 logs.go:282] 0 containers: []
	W1216 04:17:50.616577 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:50.616595 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:50.616611 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:50.674183 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:50.674218 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:50.705566 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:50.705596 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:50.817242 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:50.808677   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.809401   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811128   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.811612   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:50.813344   13034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 04:17:50.817265 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:50.817278 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:50.842758 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:50.842792 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.372576 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:53.383245 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:53.383313 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:53.407745 2088124 cri.go:89] found id: ""
	I1216 04:17:53.407767 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.407775 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:53.407781 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:53.407839 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:53.435170 2088124 cri.go:89] found id: ""
	I1216 04:17:53.435194 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.435203 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:53.435209 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:53.435268 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:53.461399 2088124 cri.go:89] found id: ""
	I1216 04:17:53.461426 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.461437 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:53.461443 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:53.461504 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:53.492254 2088124 cri.go:89] found id: ""
	I1216 04:17:53.492279 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.492289 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:53.492295 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:53.492356 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:53.515778 2088124 cri.go:89] found id: ""
	I1216 04:17:53.515802 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.515810 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:53.515816 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:53.515875 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:53.539474 2088124 cri.go:89] found id: ""
	I1216 04:17:53.539498 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.539508 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:53.539514 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:53.539576 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:53.565164 2088124 cri.go:89] found id: ""
	I1216 04:17:53.565229 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.565255 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:53.565273 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:53.565359 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:53.589875 2088124 cri.go:89] found id: ""
	I1216 04:17:53.589941 2088124 logs.go:282] 0 containers: []
	W1216 04:17:53.589963 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:53.589984 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:53.590026 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:53.654018 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:53.644813   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.645597   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647434   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647966   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.649437   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:53.644813   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.645597   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647434   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.647966   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:53.649437   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:53.654042 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:53.654059 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:53.679510 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:53.679548 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:53.719485 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:53.719514 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:53.792435 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:53.792471 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:56.314262 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:56.325267 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:17:56.325348 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:17:56.350886 2088124 cri.go:89] found id: ""
	I1216 04:17:56.350908 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.350917 2088124 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:17:56.350923 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1216 04:17:56.350985 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:17:56.375203 2088124 cri.go:89] found id: ""
	I1216 04:17:56.375230 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.375239 2088124 logs.go:284] No container was found matching "etcd"
	I1216 04:17:56.375246 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1216 04:17:56.375305 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:17:56.400956 2088124 cri.go:89] found id: ""
	I1216 04:17:56.400980 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.400988 2088124 logs.go:284] No container was found matching "coredns"
	I1216 04:17:56.400994 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:17:56.401055 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:17:56.426054 2088124 cri.go:89] found id: ""
	I1216 04:17:56.426077 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.426086 2088124 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:17:56.426093 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:17:56.426154 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:17:56.451881 2088124 cri.go:89] found id: ""
	I1216 04:17:56.451905 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.451914 2088124 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:17:56.451920 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:17:56.452029 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:17:56.483163 2088124 cri.go:89] found id: ""
	I1216 04:17:56.483190 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.483199 2088124 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:17:56.483223 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1216 04:17:56.483297 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:17:56.509283 2088124 cri.go:89] found id: ""
	I1216 04:17:56.509307 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.509316 2088124 logs.go:284] No container was found matching "kindnet"
	I1216 04:17:56.509321 2088124 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 04:17:56.509386 2088124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 04:17:56.533713 2088124 cri.go:89] found id: ""
	I1216 04:17:56.533788 2088124 logs.go:282] 0 containers: []
	W1216 04:17:56.533813 2088124 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 04:17:56.533851 2088124 logs.go:123] Gathering logs for kubelet ...
	I1216 04:17:56.533883 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:17:56.591786 2088124 logs.go:123] Gathering logs for dmesg ...
	I1216 04:17:56.591822 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:17:56.608010 2088124 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:17:56.608041 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:17:56.677352 2088124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:17:56.669278   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.669934   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.671527   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.672102   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.673230   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:17:56.669278   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.669934   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.671527   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.672102   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:17:56.673230   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:17:56.677375 2088124 logs.go:123] Gathering logs for containerd ...
	I1216 04:17:56.677388 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1216 04:17:56.710597 2088124 logs.go:123] Gathering logs for container status ...
	I1216 04:17:56.710632 2088124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:17:59.260233 2088124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:17:59.274612 2088124 out.go:203] 
	W1216 04:17:59.277673 2088124 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 04:17:59.277728 2088124 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 04:17:59.277743 2088124 out.go:285] * Related issues:
	W1216 04:17:59.277759 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 04:17:59.277770 2088124 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 04:17:59.280576 2088124 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617588600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617668196Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617791517Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617872237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.617937228Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618063593Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618133950Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618196685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618268330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618361662Z" level=info msg="Connect containerd service"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.618902392Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.619957818Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.633592863Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.633677981Z" level=info msg="Start recovering state"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.635272389Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.635428480Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673007758Z" level=info msg="Start event monitor"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673059047Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673070189Z" level=info msg="Start streaming server"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673079715Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673091785Z" level=info msg="runtime interface starting up..."
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673098340Z" level=info msg="starting plugins..."
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673130848Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:11:55 newest-cni-450938 containerd[557]: time="2025-12-16T04:11:55.673430452Z" level=info msg="containerd successfully booted in 0.082787s"
	Dec 16 04:11:55 newest-cni-450938 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:18:12.103569   13916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:12.104351   13916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:12.105968   13916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:12.106562   13916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:18:12.108139   13916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +41.065751] overlayfs: idmapped layers are currently not supported
	[Dec16 01:35] overlayfs: idmapped layers are currently not supported
	[Dec16 01:36] overlayfs: idmapped layers are currently not supported
	[Dec16 01:37] overlayfs: idmapped layers are currently not supported
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:18:12 up 10:00,  0 user,  load average: 0.97, 0.66, 1.08
	Linux newest-cni-450938 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:18:08 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:09 newest-cni-450938 kubelet[13777]: E1216 04:18:09.246006   13777 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:09 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:09 newest-cni-450938 kubelet[13783]: E1216 04:18:09.996497   13783 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:10 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:10 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:10 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 16 04:18:10 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:10 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:10 newest-cni-450938 kubelet[13818]: E1216 04:18:10.747406   13818 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:10 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:10 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:18:11 newest-cni-450938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 16 04:18:11 newest-cni-450938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:11 newest-cni-450938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:18:11 newest-cni-450938 kubelet[13824]: E1216 04:18:11.506102   13824 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:18:11 newest-cni-450938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:18:11 newest-cni-450938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
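The kubelet journal in the dump above pinpoints the failure: the v1.35.0-beta.0 kubelet exits during config validation because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so no Kubernetes containers are ever created and every crictl listing comes back empty. As a minimal diagnostic sketch (profile name newest-cni-450938 taken from the log; stat -fc %T prints cgroup2fs on a cgroup v2 host and tmpfs on cgroup v1):

	# Confirm which cgroup hierarchy the node kernel exposes
	minikube -p newest-cni-450938 ssh -- stat -fc %T /sys/fs/cgroup
	# Watch the kubelet restart loop first-hand
	minikube -p newest-cni-450938 ssh -- sudo journalctl -u kubelet -n 50 --no-pager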
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-450938 -n newest-cni-450938: exit status 2 (358.598452ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-450938" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.28s)
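The Pause failure above is downstream of the same apiserver outage: minikube's 6m wait loop reports K8S_APISERVER_MISSING when the process probe it logs (sudo pgrep -xnf kube-apiserver.*minikube.*) never finds a match. The probe can be replayed by hand as a sketch; exit status 1 means no apiserver process exists on the node:

	minikube -p newest-cni-450938 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'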
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (268.05s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
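Each WARNING below is one failed poll of that pod list; the equivalent manual query, as a sketch (assuming a kubeconfig pointing at the no-preload cluster's apiserver on 192.168.85.2:8443), is:

	kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard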
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 17 more times; elided]
E1216 04:20:05.534903 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1216 04:20:07.931223 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 42 more times; elided]
E1216 04:20:51.130825 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 18 more times; elided]
I1216 04:21:09.520955 1798370 config.go:182] Loaded profile config "flannel-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 18 more times; elided]
E1216 04:21:28.599465 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 135 more times)
E1216 04:23:44.846883 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 5 more times)
E1216 04:23:51.133663 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 18 more times)
I1216 04:24:09.308845 1798370 config.go:182] Loaded profile config "custom-flannel-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
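The collapsed run of warnings above is the signature of a label-selector wait loop: the helper lists pods in the kubernetes-dashboard namespace on an interval and treats a connection-refused apiserver as retryable until the deadline. A minimal sketch of that kind of loop, assuming client-go and the apimachinery wait helpers (the function name and the 3-second interval are illustrative, not minikube's actual code):

package integration

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDashboardPod polls for a pod matching the dashboard label selector
// until one exists or the timeout elapses; transient list errors are logged
// and retried rather than treated as fatal.
func waitForDashboardPod(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Returning (false, nil) marks the error retryable, which is
				// why the warning repeats instead of failing fast.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
}

Because the condition swallows transient errors, the loop only gives up when the 9m0s context deadline expires, which is exactly the failure reported next.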
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded *****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 2 (308.142289ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-255023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-255023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.182µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-255023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
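The assertion on the line above boils down to scanning the dashboard-metrics-scraper deployment's pod template for the expected image string. A hedged illustration with client-go (the helper name and wiring are hypothetical, not the test's actual code):

package integration

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deploymentUsesImage reports whether any container in the deployment's pod
// template references the expected image, e.g. "registry.k8s.io/echoserver:1.4".
func deploymentUsesImage(ctx context.Context, c kubernetes.Interface, ns, name, image string) (bool, error) {
	d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, ctr := range d.Spec.Template.Spec.Containers {
		if strings.Contains(ctr.Image, image) {
			return true, nil
		}
	}
	return false, nil
}

Here the check never got that far: the kubectl describe above failed with "context deadline exceeded", so the deployment info came back empty.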
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-255023
helpers_test.go:244: (dbg) docker inspect no-preload-255023:

-- stdout --
	[
	    {
	        "Id": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	        "Created": "2025-12-16T03:54:15.810217174Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2079014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:04:36.43296942Z",
	            "FinishedAt": "2025-12-16T04:04:35.01536344Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e/9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e-json.log",
	        "Name": "/no-preload-255023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-255023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-255023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e19dbb9154ccf16a817569d9e40f4d8d4618fd2558a46d36ad8be01ad17a04e",
	                "LowerDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72bc3af2b14b0da1d871fa29a4fefe1eb595d151e5d46a2fc2291635addba30c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-255023",
	                "Source": "/var/lib/docker/volumes/no-preload-255023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-255023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-255023",
	                "name.minikube.sigs.k8s.io": "no-preload-255023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8d77d7563a5b808d67c856f8fa0badaaabd481cb09d94e5909e754d7a8568f2",
	            "SandboxKey": "/var/run/docker/netns/e8d77d7563a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34664"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34665"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34668"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34666"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34667"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-255023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:af:07:e2:16:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba784dbb0bf675265a222a2ccbfc260249ee6464ab188d5ef5e9194204ab459f",
	                    "EndpointID": "d7abbd133c0576ac3aee0fa6c955e27a282475749fdbc6a2ade67d17e9ffc12d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-255023",
	                        "9e19dbb9154c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
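The inspect output above shows each guest port published on an ephemeral 127.0.0.1 port (8443/tcp, the apiserver, maps to host port 34667). A small sketch of recovering that mapping with the same Go template syntax the harness uses for 22/tcp later in this log (the helper name is illustrative):

package integration

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor8443 asks Docker for the 127.0.0.1 host port bound to the
// container's 8443/tcp, mirroring the cli_runner inspect calls in this log.
func hostPortFor8443(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

Note the container itself is Running here; it is the apiserver inside it that refuses connections, which is why inspect succeeds while the pod list polls fail.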
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 2 (366.774287ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
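The status is split: the host container reports Running while the apiserver reports Stopped, so `status` exits 2 by design yet still prints a usable state (hence "may be ok"). A hedged sketch of tolerating that the way the harness does when it shells out above (the helper name is hypothetical):

package integration

import (
	"errors"
	"os/exec"
	"strings"
)

// hostState runs `minikube status` for one field and keeps stdout even when
// the command exits non-zero, since a stopped component fails the command
// by design.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		return "", err // the binary could not even be run
	}
	return strings.TrimSpace(string(out)), nil // e.g. "Running" or "Stopped"
}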
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-255023 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-167684 sudo systemctl cat kubelet --no-pager                                                                                                                   │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo systemctl status docker --all --full --no-pager                                                                                                    │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │                     │
	│ ssh     │ -p calico-167684 sudo systemctl cat docker --no-pager                                                                                                                    │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo cat /etc/docker/daemon.json                                                                                                                        │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │                     │
	│ ssh     │ -p calico-167684 sudo docker system info                                                                                                                                 │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │                     │
	│ ssh     │ -p calico-167684 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │                     │
	│ ssh     │ -p calico-167684 sudo systemctl cat cri-docker --no-pager                                                                                                                │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │                     │
	│ ssh     │ -p calico-167684 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo cri-dockerd --version                                                                                                                              │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo systemctl status containerd --all --full --no-pager                                                                                                │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo systemctl cat containerd --no-pager                                                                                                                │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo cat /etc/containerd/config.toml                                                                                                                    │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo containerd config dump                                                                                                                             │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo systemctl status crio --all --full --no-pager                                                                                                      │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │                     │
	│ ssh     │ -p calico-167684 sudo systemctl cat crio --no-pager                                                                                                                      │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ ssh     │ -p calico-167684 sudo crio config                                                                                                                                        │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ delete  │ -p calico-167684                                                                                                                                                         │ calico-167684         │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:23 UTC │
	│ start   │ -p custom-flannel-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd │ custom-flannel-167684 │ jenkins │ v1.37.0 │ 16 Dec 25 04:23 UTC │ 16 Dec 25 04:24 UTC │
	│ ssh     │ -p custom-flannel-167684 pgrep -a kubelet                                                                                                                                │ custom-flannel-167684 │ jenkins │ v1.37.0 │ 16 Dec 25 04:24 UTC │ 16 Dec 25 04:24 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:23:15
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:23:15.958392 2128490 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:23:15.958536 2128490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:23:15.958548 2128490 out.go:374] Setting ErrFile to fd 2...
	I1216 04:23:15.958576 2128490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:23:15.958882 2128490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 04:23:15.959393 2128490 out.go:368] Setting JSON to false
	I1216 04:23:15.960309 2128490 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":36340,"bootTime":1765822656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 04:23:15.960377 2128490 start.go:143] virtualization:  
	I1216 04:23:15.964545 2128490 out.go:179] * [custom-flannel-167684] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:23:15.969357 2128490 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:23:15.969372 2128490 notify.go:221] Checking for updates...
	I1216 04:23:15.973089 2128490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:23:15.976638 2128490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:23:15.979960 2128490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 04:23:15.983126 2128490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:23:15.986398 2128490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:23:15.990138 2128490 config.go:182] Loaded profile config "no-preload-255023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 04:23:15.990261 2128490 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:23:16.024434 2128490 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:23:16.024577 2128490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:23:16.083801 2128490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:23:16.074318899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:23:16.083911 2128490 docker.go:319] overlay module found
	I1216 04:23:16.087357 2128490 out.go:179] * Using the docker driver based on user configuration
	I1216 04:23:16.090426 2128490 start.go:309] selected driver: docker
	I1216 04:23:16.090446 2128490 start.go:927] validating driver "docker" against <nil>
	I1216 04:23:16.090461 2128490 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:23:16.091333 2128490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:23:16.149526 2128490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:23:16.139966588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:23:16.149688 2128490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:23:16.149927 2128490 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:23:16.153071 2128490 out.go:179] * Using Docker driver with root privileges
	I1216 04:23:16.156082 2128490 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1216 04:23:16.156123 2128490 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1216 04:23:16.156196 2128490 start.go:353] cluster config:
	{Name:custom-flannel-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:23:16.159449 2128490 out.go:179] * Starting "custom-flannel-167684" primary control-plane node in "custom-flannel-167684" cluster
	I1216 04:23:16.162574 2128490 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 04:23:16.165642 2128490 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:23:16.168631 2128490 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 04:23:16.168686 2128490 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1216 04:23:16.168697 2128490 cache.go:65] Caching tarball of preloaded images
	I1216 04:23:16.168738 2128490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:23:16.168789 2128490 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 04:23:16.168799 2128490 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1216 04:23:16.168921 2128490 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/config.json ...
	I1216 04:23:16.168939 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/config.json: {Name:mk772324851e76f79bb896c69ee1982d91e91559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:16.191561 2128490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:23:16.191582 2128490 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:23:16.191597 2128490 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:23:16.191627 2128490 start.go:360] acquireMachinesLock for custom-flannel-167684: {Name:mke3c2aa271751e4e111915e1b76acf967c95442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:23:16.191731 2128490 start.go:364] duration metric: took 89.409µs to acquireMachinesLock for "custom-flannel-167684"
	I1216 04:23:16.191756 2128490 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:23:16.191828 2128490 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:23:16.195322 2128490 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 04:23:16.195557 2128490 start.go:159] libmachine.API.Create for "custom-flannel-167684" (driver="docker")
	I1216 04:23:16.195587 2128490 client.go:173] LocalClient.Create starting
	I1216 04:23:16.195647 2128490 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
	I1216 04:23:16.195680 2128490 main.go:143] libmachine: Decoding PEM data...
	I1216 04:23:16.195696 2128490 main.go:143] libmachine: Parsing certificate...
	I1216 04:23:16.195749 2128490 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
	I1216 04:23:16.195767 2128490 main.go:143] libmachine: Decoding PEM data...
	I1216 04:23:16.195778 2128490 main.go:143] libmachine: Parsing certificate...
	I1216 04:23:16.196773 2128490 cli_runner.go:164] Run: docker network inspect custom-flannel-167684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:23:16.216746 2128490 cli_runner.go:211] docker network inspect custom-flannel-167684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:23:16.216838 2128490 network_create.go:284] running [docker network inspect custom-flannel-167684] to gather additional debugging logs...
	I1216 04:23:16.216856 2128490 cli_runner.go:164] Run: docker network inspect custom-flannel-167684
	W1216 04:23:16.235936 2128490 cli_runner.go:211] docker network inspect custom-flannel-167684 returned with exit code 1
	I1216 04:23:16.235976 2128490 network_create.go:287] error running [docker network inspect custom-flannel-167684]: docker network inspect custom-flannel-167684: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-167684 not found
	I1216 04:23:16.235992 2128490 network_create.go:289] output of [docker network inspect custom-flannel-167684]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-167684 not found
	
	** /stderr **
	I1216 04:23:16.236104 2128490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:23:16.260131 2128490 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
	I1216 04:23:16.260520 2128490 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d705cdcdbc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:12:e3:47:7f:d3} reservation:<nil>}
	I1216 04:23:16.260793 2128490 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9eafaf3b4a19 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:e2:6e:50:29:6c:d7} reservation:<nil>}
	I1216 04:23:16.261215 2128490 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198f600}
	I1216 04:23:16.261238 2128490 network_create.go:124] attempt to create docker network custom-flannel-167684 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 04:23:16.261311 2128490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-167684 custom-flannel-167684
	I1216 04:23:16.320531 2128490 network_create.go:108] docker network custom-flannel-167684 192.168.76.0/24 created
	I1216 04:23:16.320584 2128490 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-167684" container
	I1216 04:23:16.320680 2128490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:23:16.337359 2128490 cli_runner.go:164] Run: docker volume create custom-flannel-167684 --label name.minikube.sigs.k8s.io=custom-flannel-167684 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:23:16.357052 2128490 oci.go:103] Successfully created a docker volume custom-flannel-167684
	I1216 04:23:16.357142 2128490 cli_runner.go:164] Run: docker run --rm --name custom-flannel-167684-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-167684 --entrypoint /usr/bin/test -v custom-flannel-167684:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:23:16.912460 2128490 oci.go:107] Successfully prepared a docker volume custom-flannel-167684
	I1216 04:23:16.912529 2128490 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 04:23:16.912548 2128490 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:23:16.912640 2128490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-167684:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:23:21.566559 2128490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-167684:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.653868766s)
	I1216 04:23:21.566593 2128490 kic.go:203] duration metric: took 4.654051006s to extract preloaded images to volume ...
	W1216 04:23:21.566726 2128490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:23:21.566856 2128490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:23:21.620259 2128490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-167684 --name custom-flannel-167684 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-167684 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-167684 --network custom-flannel-167684 --ip 192.168.76.2 --volume custom-flannel-167684:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:23:21.919835 2128490 cli_runner.go:164] Run: docker container inspect custom-flannel-167684 --format={{.State.Running}}
	I1216 04:23:21.938719 2128490 cli_runner.go:164] Run: docker container inspect custom-flannel-167684 --format={{.State.Status}}
	I1216 04:23:21.969426 2128490 cli_runner.go:164] Run: docker exec custom-flannel-167684 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:23:22.023881 2128490 oci.go:144] the created container "custom-flannel-167684" has a running status.
	I1216 04:23:22.023908 2128490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa...
	I1216 04:23:22.263495 2128490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:23:22.294784 2128490 cli_runner.go:164] Run: docker container inspect custom-flannel-167684 --format={{.State.Status}}
	I1216 04:23:22.317724 2128490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:23:22.317744 2128490 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-167684 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:23:22.386543 2128490 cli_runner.go:164] Run: docker container inspect custom-flannel-167684 --format={{.State.Status}}
	I1216 04:23:22.408321 2128490 machine.go:94] provisionDockerMachine start ...
	I1216 04:23:22.408587 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:22.437520 2128490 main.go:143] libmachine: Using SSH client type: native
	I1216 04:23:22.437873 2128490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34689 <nil> <nil>}
	I1216 04:23:22.437882 2128490 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:23:22.438597 2128490 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47212->127.0.0.1:34689: read: connection reset by peer
	I1216 04:23:25.570574 2128490 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-167684
	
	I1216 04:23:25.570602 2128490 ubuntu.go:182] provisioning hostname "custom-flannel-167684"
	I1216 04:23:25.570666 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:25.588390 2128490 main.go:143] libmachine: Using SSH client type: native
	I1216 04:23:25.588735 2128490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34689 <nil> <nil>}
	I1216 04:23:25.588753 2128490 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-167684 && echo "custom-flannel-167684" | sudo tee /etc/hostname
	I1216 04:23:25.733030 2128490 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-167684
	
	I1216 04:23:25.733123 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:25.750746 2128490 main.go:143] libmachine: Using SSH client type: native
	I1216 04:23:25.751119 2128490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34689 <nil> <nil>}
	I1216 04:23:25.751145 2128490 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-167684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-167684/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-167684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:23:25.883218 2128490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:23:25.883248 2128490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
	I1216 04:23:25.883278 2128490 ubuntu.go:190] setting up certificates
	I1216 04:23:25.883295 2128490 provision.go:84] configureAuth start
	I1216 04:23:25.883367 2128490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-167684
	I1216 04:23:25.900170 2128490 provision.go:143] copyHostCerts
	I1216 04:23:25.900239 2128490 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
	I1216 04:23:25.900257 2128490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
	I1216 04:23:25.900337 2128490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
	I1216 04:23:25.900441 2128490 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
	I1216 04:23:25.900452 2128490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
	I1216 04:23:25.900479 2128490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
	I1216 04:23:25.900553 2128490 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
	I1216 04:23:25.900577 2128490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
	I1216 04:23:25.900608 2128490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
	I1216 04:23:25.900672 2128490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-167684 san=[127.0.0.1 192.168.76.2 custom-flannel-167684 localhost minikube]
	I1216 04:23:26.046768 2128490 provision.go:177] copyRemoteCerts
	I1216 04:23:26.046847 2128490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:23:26.046891 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:26.065323 2128490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34689 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa Username:docker}
	I1216 04:23:26.163317 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1216 04:23:26.184089 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:23:26.202188 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:23:26.220369 2128490 provision.go:87] duration metric: took 337.047766ms to configureAuth
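configureAuth generated a server certificate whose SANs cover every address the machine answers on (127.0.0.1, 192.168.76.2, the container name, localhost, minikube) and then copied it to /etc/docker. minikube does this in Go; a rough openssl equivalent, assuming the ca.pem/ca-key.pem pair from the same .minikube/certs directory, would be:

    # sketch only, not minikube's implementation: CA-signed server cert with this run's SAN list
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.custom-flannel-167684"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:custom-flannel-167684,DNS:localhost,DNS:minikube')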
	I1216 04:23:26.220399 2128490 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:23:26.220591 2128490 config.go:182] Loaded profile config "custom-flannel-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 04:23:26.220605 2128490 machine.go:97] duration metric: took 3.812266177s to provisionDockerMachine
	I1216 04:23:26.220613 2128490 client.go:176] duration metric: took 10.02502078s to LocalClient.Create
	I1216 04:23:26.220635 2128490 start.go:167] duration metric: took 10.025079723s to libmachine.API.Create "custom-flannel-167684"
	I1216 04:23:26.220642 2128490 start.go:293] postStartSetup for "custom-flannel-167684" (driver="docker")
	I1216 04:23:26.220652 2128490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:23:26.220722 2128490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:23:26.220763 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:26.238808 2128490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34689 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa Username:docker}
	I1216 04:23:26.335217 2128490 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:23:26.338451 2128490 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:23:26.338480 2128490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:23:26.338493 2128490 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
	I1216 04:23:26.338547 2128490 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
	I1216 04:23:26.338625 2128490 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
	I1216 04:23:26.338730 2128490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 04:23:26.346283 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:23:26.363448 2128490 start.go:296] duration metric: took 142.755313ms for postStartSetup
	I1216 04:23:26.363839 2128490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-167684
	I1216 04:23:26.380451 2128490 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/config.json ...
	I1216 04:23:26.380744 2128490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:23:26.380795 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:26.397811 2128490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34689 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa Username:docker}
	I1216 04:23:26.496890 2128490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:23:26.501671 2128490 start.go:128] duration metric: took 10.309827939s to createHost
	I1216 04:23:26.501705 2128490 start.go:83] releasing machines lock for "custom-flannel-167684", held for 10.309965422s
	I1216 04:23:26.501791 2128490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-167684
	I1216 04:23:26.518628 2128490 ssh_runner.go:195] Run: cat /version.json
	I1216 04:23:26.518658 2128490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:23:26.518700 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:26.518726 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:26.537546 2128490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34689 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa Username:docker}
	I1216 04:23:26.541606 2128490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34689 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa Username:docker}
	I1216 04:23:26.756830 2128490 ssh_runner.go:195] Run: systemctl --version
	I1216 04:23:26.764064 2128490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:23:26.768466 2128490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:23:26.768561 2128490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:23:26.797813 2128490 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
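The find invocation above is logged with its shell escaping stripped; the runnable form renames every bridge/podman CNI config that is not already parked, so only the CNI minikube is about to install stays active:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;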
	I1216 04:23:26.797837 2128490 start.go:496] detecting cgroup driver to use...
	I1216 04:23:26.797869 2128490 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:23:26.797946 2128490 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 04:23:26.813399 2128490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:23:26.826750 2128490 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:23:26.826823 2128490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:23:26.844765 2128490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:23:26.863377 2128490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:23:26.982885 2128490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:23:27.109340 2128490 docker.go:234] disabling docker service ...
	I1216 04:23:27.109405 2128490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:23:27.131354 2128490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:23:27.144666 2128490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:23:27.267990 2128490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:23:27.390669 2128490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:23:27.404212 2128490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:23:27.418278 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:23:27.427688 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:23:27.438255 2128490 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:23:27.438326 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:23:27.448707 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:23:27.460351 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:23:27.470152 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:23:27.479997 2128490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:23:27.489520 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:23:27.499611 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:23:27.509891 2128490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:23:27.520156 2128490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:23:27.528907 2128490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:23:27.537090 2128490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:23:27.653808 2128490 ssh_runner.go:195] Run: sudo systemctl restart containerd
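The sed edits above leave containerd on the cgroupfs driver (SystemdCgroup = false), on the runc.v2 runtime, and with unprivileged ports enabled; the daemon-reload plus restart makes them take effect. A quick post-restart check (a sketch; the crictl info layout varies across containerd versions):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | grep -i cgroup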
	I1216 04:23:27.802731 2128490 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1216 04:23:27.802834 2128490 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1216 04:23:27.806838 2128490 start.go:564] Will wait 60s for crictl version
	I1216 04:23:27.806906 2128490 ssh_runner.go:195] Run: which crictl
	I1216 04:23:27.810619 2128490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:23:27.835604 2128490 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1216 04:23:27.835689 2128490 ssh_runner.go:195] Run: containerd --version
	I1216 04:23:27.862670 2128490 ssh_runner.go:195] Run: containerd --version
	I1216 04:23:27.889766 2128490 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1216 04:23:27.892785 2128490 cli_runner.go:164] Run: docker network inspect custom-flannel-167684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:23:27.909021 2128490 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 04:23:27.912963 2128490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
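Note the shape of the hosts rewrite: stale host.minikube.internal lines are filtered out and the fresh mapping appended into a temp file, with sudo needed only for the final cp, never for the redirection. The same pattern for any managed /etc/hosts entry (NAME and IP mirror this run; the script is illustrative):

    NAME=host.minikube.internal
    IP=192.168.76.1
    # keep every line that does not already map $NAME, then append the fresh entry
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # root is required only for the copy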
	I1216 04:23:27.922932 2128490 kubeadm.go:884] updating cluster {Name:custom-flannel-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:23:27.923074 2128490 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 04:23:27.923143 2128490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:23:27.953774 2128490 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:23:27.953799 2128490 containerd.go:534] Images already preloaded, skipping extraction
	I1216 04:23:27.953865 2128490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:23:27.978504 2128490 containerd.go:627] all images are preloaded for containerd runtime.
	I1216 04:23:27.978525 2128490 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:23:27.978533 2128490 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1216 04:23:27.978625 2128490 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-167684 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
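The drop-in rendered above clears the packaged ExecStart with an empty ExecStart= line (systemd requires that before an override) and substitutes the versioned kubelet with the node-specific flags. Once it lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp below, the merged unit can be inspected with:

    systemctl cat kubelet                          # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager # the effective command line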
	I1216 04:23:27.978690 2128490 ssh_runner.go:195] Run: sudo crictl info
	I1216 04:23:28.009848 2128490 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1216 04:23:28.009980 2128490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:23:28.010039 2128490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-167684 NodeName:custom-flannel-167684 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:23:28.010226 2128490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "custom-flannel-167684"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:23:28.010352 2128490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 04:23:28.020588 2128490 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:23:28.020709 2128490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:23:28.029624 2128490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1216 04:23:28.046419 2128490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 04:23:28.063114 2128490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
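With the unit files and kubeadm.yaml staged, the rendered config could also be sanity-checked before init; `kubeadm config validate` is one way to do that (a sketch only; minikube itself relies on kubeadm init's own preflight instead):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml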
	I1216 04:23:28.078624 2128490 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:23:28.084245 2128490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:23:28.094919 2128490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:23:28.227528 2128490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:23:28.246334 2128490 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684 for IP: 192.168.76.2
	I1216 04:23:28.246398 2128490 certs.go:195] generating shared ca certs ...
	I1216 04:23:28.246428 2128490 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:28.246600 2128490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
	I1216 04:23:28.246687 2128490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
	I1216 04:23:28.246722 2128490 certs.go:257] generating profile certs ...
	I1216 04:23:28.246799 2128490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/client.key
	I1216 04:23:28.246836 2128490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/client.crt with IP's: []
	I1216 04:23:28.333051 2128490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/client.crt ...
	I1216 04:23:28.333128 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/client.crt: {Name:mk217c9380adde3bc5d89c35b76204ffe9fca5d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:28.333375 2128490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/client.key ...
	I1216 04:23:28.333409 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/client.key: {Name:mkc971c60012619c1507a2736ee18282f5119ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:28.333585 2128490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.key.80562dd4
	I1216 04:23:28.333628 2128490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.crt.80562dd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 04:23:29.176294 2128490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.crt.80562dd4 ...
	I1216 04:23:29.176346 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.crt.80562dd4: {Name:mkdb1761ecba2154f95d98b460ee14eafeb0cc6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:29.176547 2128490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.key.80562dd4 ...
	I1216 04:23:29.176571 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.key.80562dd4: {Name:mk6a088e8f12c14fc01a372585d142962241b9d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:29.176662 2128490 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.crt.80562dd4 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.crt
	I1216 04:23:29.176748 2128490 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.key.80562dd4 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.key
	I1216 04:23:29.176808 2128490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.key
	I1216 04:23:29.176828 2128490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.crt with IP's: []
	I1216 04:23:29.348541 2128490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.crt ...
	I1216 04:23:29.348580 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.crt: {Name:mkf3136f721856480c39dd8f7299b464626e77e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:29.348761 2128490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.key ...
	I1216 04:23:29.348776 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.key: {Name:mk154eba0db16bd1e459797f85ee14198cf2004e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:29.348968 2128490 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
	W1216 04:23:29.349016 2128490 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
	I1216 04:23:29.349031 2128490 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:23:29.349097 2128490 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:23:29.349129 2128490 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:23:29.349159 2128490 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
	I1216 04:23:29.349212 2128490 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
	I1216 04:23:29.349786 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:23:29.369648 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:23:29.387269 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:23:29.405400 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:23:29.422473 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 04:23:29.439640 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:23:29.456630 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:23:29.473795 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/custom-flannel-167684/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:23:29.492168 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:23:29.509463 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
	I1216 04:23:29.526732 2128490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
	I1216 04:23:29.544267 2128490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:23:29.557047 2128490 ssh_runner.go:195] Run: openssl version
	I1216 04:23:29.566061 2128490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
	I1216 04:23:29.574010 2128490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
	I1216 04:23:29.582168 2128490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
	I1216 04:23:29.586881 2128490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
	I1216 04:23:29.586992 2128490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
	I1216 04:23:29.629868 2128490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:23:29.637818 2128490 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
	I1216 04:23:29.645762 2128490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
	I1216 04:23:29.653583 2128490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
	I1216 04:23:29.661458 2128490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
	I1216 04:23:29.665174 2128490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
	I1216 04:23:29.665285 2128490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
	I1216 04:23:29.719892 2128490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:23:29.736455 2128490 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:23:29.754311 2128490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:23:29.763240 2128490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:23:29.772230 2128490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:23:29.777155 2128490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:23:29.777281 2128490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:23:29.821407 2128490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:23:29.829671 2128490 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
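The openssl/ln pairs above implement OpenSSL's hashed trust-store convention: each CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, where the hash is exactly what `openssl x509 -hash` printed (51391683, 3ec20f2e, b5213941 in this run). The generic form:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"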
	I1216 04:23:29.837043 2128490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:23:29.840978 2128490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:23:29.841059 2128490 kubeadm.go:401] StartCluster: {Name:custom-flannel-167684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-167684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:23:29.841157 2128490 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1216 04:23:29.841224 2128490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:23:29.868307 2128490 cri.go:89] found id: ""
	I1216 04:23:29.868375 2128490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:23:29.876390 2128490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:23:29.884253 2128490 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:23:29.884318 2128490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:23:29.892636 2128490 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:23:29.892669 2128490 kubeadm.go:158] found existing configuration files:
	
	I1216 04:23:29.892739 2128490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:23:29.900738 2128490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:23:29.900817 2128490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:23:29.908508 2128490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:23:29.917107 2128490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:23:29.917186 2128490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:23:29.925375 2128490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:23:29.933485 2128490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:23:29.933600 2128490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:23:29.941397 2128490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:23:29.949455 2128490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:23:29.949548 2128490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:23:29.957237 2128490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:23:30.043819 2128490 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 04:23:30.044103 2128490 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:23:30.117127 2128490 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:23:46.277288 2128490 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 04:23:46.277348 2128490 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:23:46.277436 2128490 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:23:46.277491 2128490 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:23:46.277526 2128490 kubeadm.go:319] OS: Linux
	I1216 04:23:46.277570 2128490 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:23:46.277618 2128490 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:23:46.277678 2128490 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:23:46.277728 2128490 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:23:46.277776 2128490 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:23:46.277833 2128490 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:23:46.277879 2128490 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:23:46.277927 2128490 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:23:46.277976 2128490 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:23:46.278048 2128490 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:23:46.278141 2128490 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:23:46.278231 2128490 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:23:46.278293 2128490 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:23:46.281384 2128490 out.go:252]   - Generating certificates and keys ...
	I1216 04:23:46.281494 2128490 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:23:46.281590 2128490 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:23:46.281674 2128490 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:23:46.281738 2128490 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:23:46.281810 2128490 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:23:46.281864 2128490 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:23:46.281937 2128490 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:23:46.282084 2128490 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-167684 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:23:46.282143 2128490 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:23:46.282282 2128490 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-167684 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 04:23:46.282352 2128490 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:23:46.282431 2128490 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:23:46.282483 2128490 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:23:46.282544 2128490 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:23:46.282600 2128490 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:23:46.282661 2128490 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:23:46.282718 2128490 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:23:46.282784 2128490 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:23:46.282843 2128490 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:23:46.282940 2128490 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:23:46.283010 2128490 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:23:46.288116 2128490 out.go:252]   - Booting up control plane ...
	I1216 04:23:46.288277 2128490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:23:46.288372 2128490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:23:46.288455 2128490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:23:46.288643 2128490 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:23:46.288760 2128490 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:23:46.288881 2128490 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:23:46.289008 2128490 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:23:46.289099 2128490 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:23:46.289263 2128490 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:23:46.289386 2128490 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:23:46.289455 2128490 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501708896s
	I1216 04:23:46.289566 2128490 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 04:23:46.289688 2128490 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1216 04:23:46.289804 2128490 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 04:23:46.289927 2128490 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 04:23:46.290060 2128490 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.048478517s
	I1216 04:23:46.290140 2128490 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.443751152s
	I1216 04:23:46.290220 2128490 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501821992s
	I1216 04:23:46.290352 2128490 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 04:23:46.290486 2128490 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 04:23:46.290561 2128490 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 04:23:46.290775 2128490 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-167684 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 04:23:46.290837 2128490 kubeadm.go:319] [bootstrap-token] Using token: 8qcjm0.9p689aq9pym3nf04
	I1216 04:23:46.293945 2128490 out.go:252]   - Configuring RBAC rules ...
	I1216 04:23:46.294073 2128490 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 04:23:46.294232 2128490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 04:23:46.294392 2128490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 04:23:46.294561 2128490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 04:23:46.294711 2128490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 04:23:46.294812 2128490 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 04:23:46.294956 2128490 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 04:23:46.295008 2128490 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 04:23:46.295098 2128490 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 04:23:46.295114 2128490 kubeadm.go:319] 
	I1216 04:23:46.295188 2128490 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 04:23:46.295202 2128490 kubeadm.go:319] 
	I1216 04:23:46.295287 2128490 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 04:23:46.295297 2128490 kubeadm.go:319] 
	I1216 04:23:46.295322 2128490 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 04:23:46.295385 2128490 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 04:23:46.295440 2128490 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 04:23:46.295448 2128490 kubeadm.go:319] 
	I1216 04:23:46.295502 2128490 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 04:23:46.295506 2128490 kubeadm.go:319] 
	I1216 04:23:46.295561 2128490 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 04:23:46.295565 2128490 kubeadm.go:319] 
	I1216 04:23:46.295617 2128490 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 04:23:46.295692 2128490 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 04:23:46.295760 2128490 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 04:23:46.295763 2128490 kubeadm.go:319] 
	I1216 04:23:46.295848 2128490 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 04:23:46.295924 2128490 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 04:23:46.295928 2128490 kubeadm.go:319] 
	I1216 04:23:46.296019 2128490 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8qcjm0.9p689aq9pym3nf04 \
	I1216 04:23:46.296122 2128490 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5ff7898403cb7d1c6cd652105c589f920cbc34cc5b43666798ad823c7f84bffc \
	I1216 04:23:46.296142 2128490 kubeadm.go:319] 	--control-plane 
	I1216 04:23:46.296150 2128490 kubeadm.go:319] 
	I1216 04:23:46.296235 2128490 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 04:23:46.296239 2128490 kubeadm.go:319] 
	I1216 04:23:46.296321 2128490 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8qcjm0.9p689aq9pym3nf04 \
	I1216 04:23:46.296437 2128490 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5ff7898403cb7d1c6cd652105c589f920cbc34cc5b43666798ad823c7f84bffc 
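The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard kubeadm recipe, pointed at minikube's cert layout rather than the default /etc/kubernetes/pki (and assuming, as here, an RSA CA key):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'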
	I1216 04:23:46.296445 2128490 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1216 04:23:46.299596 2128490 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1216 04:23:46.302431 2128490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 04:23:46.302536 2128490 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1216 04:23:46.307179 2128490 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1216 04:23:46.307225 2128490 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1216 04:23:46.327033 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
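Because the CNI was supplied as testdata/kube-flannel.yaml rather than a built-in, it is applied verbatim with the node-local kubectl. A natural follow-up check is the DaemonSet rollout (a sketch; the namespace and DaemonSet name depend on the manifest, kube-flannel/kube-flannel-ds in upstream flannel):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-flannel rollout status ds/kube-flannel-ds --timeout=120s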
	I1216 04:23:46.814269 2128490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 04:23:46.814403 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:46.814477 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-167684 minikube.k8s.io/updated_at=2025_12_16T04_23_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=custom-flannel-167684 minikube.k8s.io/primary=true
	I1216 04:23:47.030431 2128490 ops.go:34] apiserver oom_adj: -16
	I1216 04:23:47.030537 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:47.530673 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:48.031261 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:48.531189 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:49.030618 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:49.531235 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:50.030842 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:50.531033 2128490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:23:50.643604 2128490 kubeadm.go:1114] duration metric: took 3.829245754s to wait for elevateKubeSystemPrivileges
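The burst of `get sa default` calls above is a half-second poll: the minikube-rbac ClusterRoleBinding for kube-system:default is only useful once the controller-manager has minted the default ServiceAccount, so minikube retries until it appears (about 3.8s in this run). The equivalent wait loop:

    # poll until the default ServiceAccount exists in the default namespace
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do sleep 0.5; done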
	I1216 04:23:50.643636 2128490 kubeadm.go:403] duration metric: took 20.802608733s to StartCluster
	I1216 04:23:50.643654 2128490 settings.go:142] acquiring lock: {Name:mk97640b31ca87fdb64d334c0fdba28034d282ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:50.643737 2128490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 04:23:50.644723 2128490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/kubeconfig: {Name:mk08db546f5f9e39bccc559fd0481ec56ebdc750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:23:50.644976 2128490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 04:23:50.644973 2128490 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1216 04:23:50.645237 2128490 config.go:182] Loaded profile config "custom-flannel-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 04:23:50.645286 2128490 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:23:50.645350 2128490 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-167684"
	I1216 04:23:50.645365 2128490 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-167684"
	I1216 04:23:50.645388 2128490 host.go:66] Checking if "custom-flannel-167684" exists ...
	I1216 04:23:50.645851 2128490 cli_runner.go:164] Run: docker container inspect custom-flannel-167684 --format={{.State.Status}}
	I1216 04:23:50.646260 2128490 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-167684"
	I1216 04:23:50.646285 2128490 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-167684"
	I1216 04:23:50.646573 2128490 cli_runner.go:164] Run: docker container inspect custom-flannel-167684 --format={{.State.Status}}
	I1216 04:23:50.652634 2128490 out.go:179] * Verifying Kubernetes components...
	I1216 04:23:50.663210 2128490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:23:50.677394 2128490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:23:50.680384 2128490 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:23:50.680408 2128490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:23:50.680485 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:50.702832 2128490 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-167684"
	I1216 04:23:50.702872 2128490 host.go:66] Checking if "custom-flannel-167684" exists ...
	I1216 04:23:50.708815 2128490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34689 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa Username:docker}
	I1216 04:23:50.710160 2128490 cli_runner.go:164] Run: docker container inspect custom-flannel-167684 --format={{.State.Status}}
	I1216 04:23:50.740639 2128490 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:23:50.740663 2128490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:23:50.740735 2128490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-167684
	I1216 04:23:50.770566 2128490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34689 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/custom-flannel-167684/id_rsa Username:docker}
	I1216 04:23:51.095001 2128490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:23:51.183634 2128490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 04:23:51.183777 2128490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:23:51.200746 2128490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:23:52.148221 2128490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.053129922s)
	I1216 04:23:52.149136 2128490 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-167684" to be "Ready" ...
	I1216 04:23:52.149465 2128490 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1216 04:23:52.222434 2128490 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 04:23:52.224873 2128490 addons.go:530] duration metric: took 1.579576203s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 04:23:52.653465 2128490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-167684" context rescaled to 1 replicas
	W1216 04:23:54.152927 2128490 node_ready.go:57] node "custom-flannel-167684" has "Ready":"False" status (will retry)
	I1216 04:23:54.653993 2128490 node_ready.go:49] node "custom-flannel-167684" is "Ready"
	I1216 04:23:54.654028 2128490 node_ready.go:38] duration metric: took 2.504850438s for node "custom-flannel-167684" to be "Ready" ...
	I1216 04:23:54.654041 2128490 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:23:54.654105 2128490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:23:54.670382 2128490 api_server.go:72] duration metric: took 4.025380154s to wait for apiserver process to appear ...
	I1216 04:23:54.670415 2128490 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:23:54.670436 2128490 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 04:23:54.683025 2128490 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 04:23:54.684602 2128490 api_server.go:141] control plane version: v1.34.2
	I1216 04:23:54.684633 2128490 api_server.go:131] duration metric: took 14.210978ms to wait for apiserver health ...
	I1216 04:23:54.684644 2128490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 04:23:54.689437 2128490 system_pods.go:59] 7 kube-system pods found
	I1216 04:23:54.689476 2128490 system_pods.go:61] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:54.689483 2128490 system_pods.go:61] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:54.689490 2128490 system_pods.go:61] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:54.689498 2128490 system_pods.go:61] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:54.689503 2128490 system_pods.go:61] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:54.689508 2128490 system_pods.go:61] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 04:23:54.689518 2128490 system_pods.go:61] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:23:54.689525 2128490 system_pods.go:74] duration metric: took 4.875932ms to wait for pod list to return data ...
	I1216 04:23:54.689533 2128490 default_sa.go:34] waiting for default service account to be created ...
	I1216 04:23:54.692254 2128490 default_sa.go:45] found service account: "default"
	I1216 04:23:54.692278 2128490 default_sa.go:55] duration metric: took 2.739998ms for default service account to be created ...
	I1216 04:23:54.692288 2128490 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 04:23:54.695152 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:54.695190 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:54.695198 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:54.695207 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:54.695228 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:54.695237 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:54.695246 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 04:23:54.695253 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:23:54.695274 2128490 retry.go:31] will retry after 218.257536ms: missing components: kube-dns
	I1216 04:23:54.924672 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:54.924712 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:54.924720 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:54.924759 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:54.924780 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:54.924788 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:54.924799 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:54.924806 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:23:54.924835 2128490 retry.go:31] will retry after 324.254736ms: missing components: kube-dns
	I1216 04:23:55.253356 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:55.253391 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:55.253399 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:55.253406 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:55.253416 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:55.253421 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:55.253431 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:55.253439 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:23:55.253464 2128490 retry.go:31] will retry after 333.147384ms: missing components: kube-dns
	I1216 04:23:55.590244 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:55.590279 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:55.590287 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:55.590299 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:55.590306 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:55.590311 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:55.590316 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:55.590328 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:23:55.590347 2128490 retry.go:31] will retry after 406.508132ms: missing components: kube-dns
	I1216 04:23:56.003944 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:56.003990 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:56.003998 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:56.004006 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:56.004013 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:56.004019 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:56.004024 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:56.004028 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:23:56.004045 2128490 retry.go:31] will retry after 719.010983ms: missing components: kube-dns
	I1216 04:23:56.727196 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:56.727229 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:56.727236 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:56.727243 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:56.727249 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:56.727253 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:56.727258 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:56.727262 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:23:56.727277 2128490 retry.go:31] will retry after 597.08105ms: missing components: kube-dns
	I1216 04:23:57.327799 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:57.327839 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:57.327847 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:57.327854 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:57.327862 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 04:23:57.327866 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:57.327871 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:57.327875 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:23:57.327896 2128490 retry.go:31] will retry after 991.489946ms: missing components: kube-dns
	I1216 04:23:58.323281 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:58.323320 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:58.323328 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:58.323334 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:58.323339 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running
	I1216 04:23:58.323343 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:58.323355 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:58.323359 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:23:58.323374 2128490 retry.go:31] will retry after 1.160334636s: missing components: kube-dns
	I1216 04:23:59.487423 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:23:59.487464 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:23:59.487472 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:23:59.487479 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:23:59.487484 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running
	I1216 04:23:59.487488 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:23:59.487492 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:23:59.487496 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:23:59.487511 2128490 retry.go:31] will retry after 1.747188468s: missing components: kube-dns
	I1216 04:24:01.239012 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:24:01.239069 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:24:01.239079 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:24:01.239086 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:24:01.239092 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running
	I1216 04:24:01.239096 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:24:01.239100 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:24:01.239103 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:24:01.239117 2128490 retry.go:31] will retry after 1.69488994s: missing components: kube-dns
	I1216 04:24:02.938231 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:24:02.938267 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:24:02.938279 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:24:02.938288 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:24:02.938293 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running
	I1216 04:24:02.938297 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:24:02.938302 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:24:02.938314 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:24:02.938330 2128490 retry.go:31] will retry after 1.882310607s: missing components: kube-dns
	I1216 04:24:04.826030 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:24:04.826077 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:24:04.826084 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:24:04.826092 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:24:04.826097 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running
	I1216 04:24:04.826101 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:24:04.826106 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:24:04.826110 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:24:04.826125 2128490 retry.go:31] will retry after 2.478672366s: missing components: kube-dns
	I1216 04:24:07.308430 2128490 system_pods.go:86] 7 kube-system pods found
	I1216 04:24:07.308461 2128490 system_pods.go:89] "coredns-66bc5c9577-klnd9" [52faf10f-26fd-4359-a236-1986ec9d26f9] Running
	I1216 04:24:07.308468 2128490 system_pods.go:89] "etcd-custom-flannel-167684" [6d61219f-3241-4472-b76e-6859bcee0594] Running
	I1216 04:24:07.308473 2128490 system_pods.go:89] "kube-apiserver-custom-flannel-167684" [71cddd35-6389-4986-966b-ee60b77c6ce5] Running
	I1216 04:24:07.308478 2128490 system_pods.go:89] "kube-controller-manager-custom-flannel-167684" [7467c892-0e45-4b2c-a0fc-63d866638698] Running
	I1216 04:24:07.308487 2128490 system_pods.go:89] "kube-proxy-jbspv" [ce5bca8b-dda6-4daf-a716-798b593138ec] Running
	I1216 04:24:07.308492 2128490 system_pods.go:89] "kube-scheduler-custom-flannel-167684" [687bdf39-74c1-4d62-9387-ff335ad0bdda] Running
	I1216 04:24:07.308497 2128490 system_pods.go:89] "storage-provisioner" [c1c058c0-c995-4384-9487-205d11d66b76] Running
	I1216 04:24:07.308506 2128490 system_pods.go:126] duration metric: took 12.616211427s to wait for k8s-apps to be running ...
	I1216 04:24:07.308520 2128490 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 04:24:07.308584 2128490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:24:07.321621 2128490 system_svc.go:56] duration metric: took 13.091955ms WaitForService to wait for kubelet
	I1216 04:24:07.321652 2128490 kubeadm.go:587] duration metric: took 16.676653268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:24:07.321672 2128490 node_conditions.go:102] verifying NodePressure condition ...
	I1216 04:24:07.324305 2128490 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 04:24:07.324341 2128490 node_conditions.go:123] node cpu capacity is 2
	I1216 04:24:07.324355 2128490 node_conditions.go:105] duration metric: took 2.679085ms to run NodePressure ...
	I1216 04:24:07.324369 2128490 start.go:242] waiting for startup goroutines ...
	I1216 04:24:07.324377 2128490 start.go:247] waiting for cluster config update ...
	I1216 04:24:07.324389 2128490 start.go:256] writing updated cluster config ...
	I1216 04:24:07.324694 2128490 ssh_runner.go:195] Run: rm -f paused
	I1216 04:24:07.328484 2128490 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:24:07.332034 2128490 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-klnd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.336903 2128490 pod_ready.go:94] pod "coredns-66bc5c9577-klnd9" is "Ready"
	I1216 04:24:07.336931 2128490 pod_ready.go:86] duration metric: took 4.868835ms for pod "coredns-66bc5c9577-klnd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.339746 2128490 pod_ready.go:83] waiting for pod "etcd-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.344722 2128490 pod_ready.go:94] pod "etcd-custom-flannel-167684" is "Ready"
	I1216 04:24:07.344753 2128490 pod_ready.go:86] duration metric: took 4.976279ms for pod "etcd-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.347399 2128490 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.352013 2128490 pod_ready.go:94] pod "kube-apiserver-custom-flannel-167684" is "Ready"
	I1216 04:24:07.352045 2128490 pod_ready.go:86] duration metric: took 4.620972ms for pod "kube-apiserver-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.354364 2128490 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.732512 2128490 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-167684" is "Ready"
	I1216 04:24:07.732544 2128490 pod_ready.go:86] duration metric: took 378.153076ms for pod "kube-controller-manager-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:07.932998 2128490 pod_ready.go:83] waiting for pod "kube-proxy-jbspv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:08.332473 2128490 pod_ready.go:94] pod "kube-proxy-jbspv" is "Ready"
	I1216 04:24:08.332501 2128490 pod_ready.go:86] duration metric: took 399.472058ms for pod "kube-proxy-jbspv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:08.532713 2128490 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:08.932113 2128490 pod_ready.go:94] pod "kube-scheduler-custom-flannel-167684" is "Ready"
	I1216 04:24:08.932142 2128490 pod_ready.go:86] duration metric: took 399.400019ms for pod "kube-scheduler-custom-flannel-167684" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:24:08.932156 2128490 pod_ready.go:40] duration metric: took 1.603639166s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:24:08.981161 2128490 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 04:24:08.984609 2128490 out.go:179] * Done! kubectl is now configured to use "custom-flannel-167684" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207708993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207730006Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207767502Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207785201Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207802406Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207818545Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207829860Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207852456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207871098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.207912377Z" level=info msg="Connect containerd service"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.208248542Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.209178851Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.222779090Z" level=info msg="Start subscribing containerd event"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223109158Z" level=info msg="Start recovering state"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223239174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.223305749Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252281513Z" level=info msg="Start event monitor"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252522812Z" level=info msg="Start cni network conf syncer for default"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252604139Z" level=info msg="Start streaming server"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252676581Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252754257Z" level=info msg="runtime interface starting up..."
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.252816426Z" level=info msg="starting plugins..."
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.253604035Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:04:42 no-preload-255023 containerd[555]: time="2025-12-16T04:04:42.254042786Z" level=info msg="containerd successfully booted in 0.075355s"
	Dec 16 04:04:42 no-preload-255023 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:24:15.710497   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:24:15.711138   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:24:15.712627   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:24:15.713011   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 04:24:15.714532   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:39] overlayfs: idmapped layers are currently not supported
	[Dec16 01:41] overlayfs: idmapped layers are currently not supported
	[Dec16 01:52] overlayfs: idmapped layers are currently not supported
	[Dec16 01:53] overlayfs: idmapped layers are currently not supported
	[Dec16 01:54] overlayfs: idmapped layers are currently not supported
	[  +4.093900] overlayfs: idmapped layers are currently not supported
	[Dec16 01:55] overlayfs: idmapped layers are currently not supported
	[Dec16 01:56] overlayfs: idmapped layers are currently not supported
	[Dec16 01:57] overlayfs: idmapped layers are currently not supported
	[Dec16 01:58] overlayfs: idmapped layers are currently not supported
	[  +0.991766] overlayfs: idmapped layers are currently not supported
	[Dec16 02:00] overlayfs: idmapped layers are currently not supported
	[  +1.213477] overlayfs: idmapped layers are currently not supported
	[Dec16 02:01] overlayfs: idmapped layers are currently not supported
	[Dec16 02:18] overlayfs: idmapped layers are currently not supported
	[Dec16 02:20] overlayfs: idmapped layers are currently not supported
	[Dec16 02:22] overlayfs: idmapped layers are currently not supported
	[Dec16 02:24] overlayfs: idmapped layers are currently not supported
	[Dec16 02:25] overlayfs: idmapped layers are currently not supported
	[Dec16 02:27] overlayfs: idmapped layers are currently not supported
	[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:24:15 up 10:06,  0 user,  load average: 1.91, 1.57, 1.35
	Linux no-preload-255023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:24:12 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:24:13 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1560.
	Dec 16 04:24:13 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:13 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:13 no-preload-255023 kubelet[10126]: E1216 04:24:13.236125   10126 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:24:13 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:24:13 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:24:13 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1561.
	Dec 16 04:24:13 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:13 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:13 no-preload-255023 kubelet[10132]: E1216 04:24:13.983245   10132 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:24:13 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:24:13 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:24:14 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1562.
	Dec 16 04:24:14 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:14 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:14 no-preload-255023 kubelet[10152]: E1216 04:24:14.739524   10152 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:24:14 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:24:14 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:24:15 no-preload-255023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1563.
	Dec 16 04:24:15 no-preload-255023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:15 no-preload-255023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:24:15 no-preload-255023 kubelet[10211]: E1216 04:24:15.504572   10211 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:24:15 no-preload-255023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:24:15 no-preload-255023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
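The kubelet excerpt above shows the likely root cause of this failure group: the kubelet shipped with this profile (v1.35.0-beta.0, per the kubectl binary path in the describe-nodes command) refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), crash-looping under systemd, so the apiserver never comes up and every kubectl call against localhost:8443 is refused. A minimal way to confirm the host's cgroup mode before triaging further — a sketch using standard tools, not part of the test harness (`cgroup2fs` indicates cgroup v2, `tmpfs` indicates v1):

    stat -fc %T /sys/fs/cgroup/          # "cgroup2fs" => cgroup v2, "tmpfs" => cgroup v1
    docker info 2>/dev/null | grep -i cgroup   # cgroup driver/version as seen by the Docker driver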
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-255023 -n no-preload-255023: exit status 2 (410.541086ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-255023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (268.05s)
E1216 04:26:44.182619 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:08.369303 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:19.096047 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:25.144249 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:36.572790 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:36.579247 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:36.590681 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:36.612016 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:36.653423 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:36.734880 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:36.896497 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:37.218171 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:37.860122 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:39.141462 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:27:41.703745 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
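The cert_rotation errors above appear to come from the test binary itself (PID 1798370, logger "tls-transport-cache"): client-go keeps trying to reload client certificates for profiles the suite has already torn down, so they are harness-side noise rather than additional cluster failures. A quick sketch to check which of the referenced profiles are actually gone — paths copied from the messages above, profile list illustrative:

    PROFILE_DIR=/home/jenkins/minikube-integration/22158-1796512/.minikube/profiles
    for p in flannel-167684 no-preload-255023 auto-167684 calico-167684; do
      [ -f "$PROFILE_DIR/$p/client.crt" ] || echo "client.crt gone for $p (profile deleted)"
    done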


Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 20.58
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 8.14
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 9.79
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.16
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.25
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.25
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 165.79
38 TestAddons/serial/Volcano 39.75
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/serial/GCPAuth/FakeCredentials 10.83
44 TestAddons/parallel/Registry 15.55
45 TestAddons/parallel/RegistryCreds 0.73
46 TestAddons/parallel/Ingress 18.02
47 TestAddons/parallel/InspektorGadget 10.8
48 TestAddons/parallel/MetricsServer 6.93
50 TestAddons/parallel/CSI 63.77
51 TestAddons/parallel/Headlamp 17.43
52 TestAddons/parallel/CloudSpanner 6.78
53 TestAddons/parallel/LocalPath 52.93
54 TestAddons/parallel/NvidiaDevicePlugin 5.64
55 TestAddons/parallel/Yakd 11.86
57 TestAddons/StoppedEnableDisable 12.36
58 TestCertOptions 43.64
59 TestCertExpiration 222.02
61 TestForceSystemdFlag 32.16
62 TestForceSystemdEnv 37.37
63 TestDockerEnvContainerd 46.38
67 TestErrorSpam/setup 30.75
68 TestErrorSpam/start 0.82
69 TestErrorSpam/status 1.14
70 TestErrorSpam/pause 1.9
71 TestErrorSpam/unpause 1.74
72 TestErrorSpam/stop 1.63
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 75.97
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.32
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.67
84 TestFunctional/serial/CacheCmd/cache/add_local 1.3
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.15
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
92 TestFunctional/serial/ExtraConfig 47.53
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.68
96 TestFunctional/serial/InvalidService 4.15
98 TestFunctional/parallel/ConfigCmd 0.44
99 TestFunctional/parallel/DashboardCmd 7.36
100 TestFunctional/parallel/DryRun 0.54
101 TestFunctional/parallel/InternationalLanguage 0.22
102 TestFunctional/parallel/StatusCmd 1.03
106 TestFunctional/parallel/ServiceCmdConnect 8.61
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 22.29
110 TestFunctional/parallel/SSHCmd 0.89
111 TestFunctional/parallel/CpCmd 2.45
113 TestFunctional/parallel/FileSync 0.48
114 TestFunctional/parallel/CertSync 1.74
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.91
122 TestFunctional/parallel/License 0.37
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
125 TestFunctional/parallel/Version/short 0.08
126 TestFunctional/parallel/Version/components 1.39
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.41
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
134 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
135 TestFunctional/parallel/ImageCommands/Setup 0.7
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.18
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/MountCmd/any-port 8.33
153 TestFunctional/parallel/MountCmd/specific-port 1.99
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
155 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
157 TestFunctional/parallel/ProfileCmd/profile_list 0.43
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
159 TestFunctional/parallel/ServiceCmd/List 1.45
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.34
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
162 TestFunctional/parallel/ServiceCmd/Format 0.51
163 TestFunctional/parallel/ServiceCmd/URL 0.46
164 TestFunctional/delete_echo-server_images 0.05
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.07
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.37
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.04
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.91
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.94
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.03
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.56
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.45
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.68
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.42
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.7
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.29
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.42
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.65
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.17
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.52
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.25
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.27
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.12
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.08
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.35
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.65
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.18
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 179.45
265 TestMultiControlPlane/serial/DeployApp 7.69
266 TestMultiControlPlane/serial/PingHostFromPods 1.7
267 TestMultiControlPlane/serial/AddWorkerNode 59.74
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
270 TestMultiControlPlane/serial/CopyFile 21
271 TestMultiControlPlane/serial/StopSecondaryNode 13.03
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
273 TestMultiControlPlane/serial/RestartSecondaryNode 14.46
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 100.57
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.13
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
278 TestMultiControlPlane/serial/StopCluster 36.52
279 TestMultiControlPlane/serial/RestartCluster 60.07
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
281 TestMultiControlPlane/serial/AddSecondaryNode 54.18
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
287 TestJSONOutput/start/Command 81.81
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.89
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.64
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.01
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 40.58
313 TestKicCustomNetwork/use_default_bridge_network 38.23
314 TestKicExistingNetwork 34.9
315 TestKicCustomSubnet 35.44
316 TestKicStaticIP 34.59
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 69.77
321 TestMountStart/serial/StartWithMountFirst 8.35
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.41
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.54
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 109.38
333 TestMultiNode/serial/DeployApp2Nodes 5.3
334 TestMultiNode/serial/PingHostFrom2Pods 1
335 TestMultiNode/serial/AddNode 27.79
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.47
339 TestMultiNode/serial/StopNode 2.42
340 TestMultiNode/serial/StartAfterStop 8.22
341 TestMultiNode/serial/RestartKeepsNodes 78
342 TestMultiNode/serial/DeleteNode 5.92
343 TestMultiNode/serial/StopMultiNode 24.08
344 TestMultiNode/serial/RestartMultiNode 50.58
345 TestMultiNode/serial/ValidateNameConflict 35.89
350 TestPreload 121.92
352 TestScheduledStopUnix 108.25
355 TestInsufficientStorage 12.6
356 TestRunningBinaryUpgrade 63
359 TestMissingContainerUpgrade 143.92
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 41.93
363 TestNoKubernetes/serial/StartWithStopK8s 25.28
364 TestNoKubernetes/serial/Start 7.57
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
367 TestNoKubernetes/serial/ProfileList 0.7
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 6.75
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
371 TestStoppedBinaryUpgrade/Setup 4.92
372 TestStoppedBinaryUpgrade/Upgrade 299.14
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.12
382 TestPause/serial/Start 51.68
383 TestPause/serial/SecondStartNoReconfiguration 6.28
384 TestPause/serial/Pause 0.77
385 TestPause/serial/VerifyStatus 0.33
386 TestPause/serial/Unpause 0.62
387 TestPause/serial/PauseAgain 0.85
388 TestPause/serial/DeletePaused 2.79
389 TestPause/serial/VerifyDeletedResources 0.38
397 TestNetworkPlugins/group/false 3.63
402 TestStartStop/group/old-k8s-version/serial/FirstStart 69.01
405 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
407 TestStartStop/group/old-k8s-version/serial/Stop 12.11
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
409 TestStartStop/group/old-k8s-version/serial/SecondStart 56.74
410 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
412 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
413 TestStartStop/group/old-k8s-version/serial/Pause 3.09
415 TestStartStop/group/embed-certs/serial/FirstStart 50.72
416 TestStartStop/group/embed-certs/serial/DeployApp 9.34
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
418 TestStartStop/group/embed-certs/serial/Stop 12.09
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/embed-certs/serial/SecondStart 49.91
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
424 TestStartStop/group/embed-certs/serial/Pause 3.16
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.53
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.65
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.15
440 TestStartStop/group/no-preload/serial/Stop 1.31
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
443 TestStartStop/group/newest-cni/serial/DeployApp 0
446 TestStartStop/group/newest-cni/serial/Stop 1.31
447 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
453 TestNetworkPlugins/group/auto/Start 78.64
454 TestNetworkPlugins/group/auto/KubeletFlags 0.31
455 TestNetworkPlugins/group/auto/NetCatPod 8.31
456 TestNetworkPlugins/group/auto/DNS 0.17
457 TestNetworkPlugins/group/auto/Localhost 0.16
458 TestNetworkPlugins/group/auto/HairPin 0.15
460 TestNetworkPlugins/group/flannel/Start 58.66
461 TestNetworkPlugins/group/flannel/ControllerPod 6.01
462 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
463 TestNetworkPlugins/group/flannel/NetCatPod 8.27
464 TestNetworkPlugins/group/flannel/DNS 0.17
465 TestNetworkPlugins/group/flannel/Localhost 0.16
466 TestNetworkPlugins/group/flannel/HairPin 0.14
467 TestNetworkPlugins/group/calico/Start 57.01
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.36
470 TestNetworkPlugins/group/calico/NetCatPod 10.35
471 TestNetworkPlugins/group/calico/DNS 0.17
472 TestNetworkPlugins/group/calico/Localhost 0.16
473 TestNetworkPlugins/group/calico/HairPin 0.19
474 TestNetworkPlugins/group/custom-flannel/Start 53.11
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.3
477 TestNetworkPlugins/group/kindnet/Start 89.42
478 TestNetworkPlugins/group/custom-flannel/DNS 0.22
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
481 TestNetworkPlugins/group/bridge/Start 74.19
482 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
483 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
484 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
485 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
486 TestNetworkPlugins/group/bridge/NetCatPod 8.56
487 TestNetworkPlugins/group/kindnet/DNS 0.19
488 TestNetworkPlugins/group/kindnet/Localhost 0.17
489 TestNetworkPlugins/group/kindnet/HairPin 0.15
490 TestNetworkPlugins/group/bridge/DNS 0.18
491 TestNetworkPlugins/group/bridge/Localhost 0.16
492 TestNetworkPlugins/group/bridge/HairPin 0.16
493 TestNetworkPlugins/group/enable-default-cni/Start 72.82
494 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
495 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
496 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
497 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
498 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15

TestDownloadOnly/v1.28.0/json-events (20.58s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-734582 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-734582 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (20.575622044s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (20.58s)
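
For context on where those 20 seconds go: this run downloads the v1.28.0 preload tarball and verifies it against an md5 checksum fetched from the GCS API (see the "Downloading: ...?checksum=md5:..." lines in the Last Start log further down). A minimal Go sketch of that verify-while-downloading pattern, using a hypothetical downloadVerified helper rather than minikube's actual download.go API:

    // Sketch: download a file and verify it against an expected md5 checksum.
    // Names are illustrative; this is not minikube's real API.
    package download

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func downloadVerified(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        // Hash the stream while writing it to disk, then compare digests.
        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }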

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 02:30:43.753863 1798370 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1216 02:30:43.753943 1798370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
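
The preload-exists check above is cheap by design: it only confirms that the tarball cached by the previous test is still on disk. A minimal sketch of such a check (the path layout is taken from the log lines above; hasPreload is a hypothetical name):

    // Sketch: report whether a cached preload tarball exists locally.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func hasPreload(minikubeHome, tarball string) bool {
        p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", tarball)
        _, err := os.Stat(p)
        return err == nil
    }

    func main() {
        fmt.Println(hasPreload(os.Getenv("MINIKUBE_HOME"),
            "preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"))
    }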

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-734582
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-734582: exit status 85 (87.677811ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-734582 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-734582 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:30:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:30:23.226883 1798376 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:30:23.227014 1798376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:23.227077 1798376 out.go:374] Setting ErrFile to fd 2...
	I1216 02:30:23.227102 1798376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:23.227393 1798376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	W1216 02:30:23.227577 1798376 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22158-1796512/.minikube/config/config.json: open /home/jenkins/minikube-integration/22158-1796512/.minikube/config/config.json: no such file or directory
	I1216 02:30:23.228034 1798376 out.go:368] Setting JSON to true
	I1216 02:30:23.228906 1798376 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":29568,"bootTime":1765822656,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:30:23.229019 1798376 start.go:143] virtualization:  
	I1216 02:30:23.234634 1798376 out.go:99] [download-only-734582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1216 02:30:23.234883 1798376 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 02:30:23.235013 1798376 notify.go:221] Checking for updates...
	I1216 02:30:23.239290 1798376 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:30:23.242860 1798376 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:30:23.246361 1798376 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:30:23.249787 1798376 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:30:23.253236 1798376 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 02:30:23.259396 1798376 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:30:23.259681 1798376 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:30:23.281545 1798376 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:30:23.281655 1798376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:30:23.348418 1798376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-16 02:30:23.339212731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:30:23.348524 1798376 docker.go:319] overlay module found
	I1216 02:30:23.351758 1798376 out.go:99] Using the docker driver based on user configuration
	I1216 02:30:23.351804 1798376 start.go:309] selected driver: docker
	I1216 02:30:23.351811 1798376 start.go:927] validating driver "docker" against <nil>
	I1216 02:30:23.351935 1798376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:30:23.407924 1798376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-16 02:30:23.398198892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:30:23.408092 1798376 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:30:23.408381 1798376 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 02:30:23.408534 1798376 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:30:23.411736 1798376 out.go:171] Using Docker driver with root privileges
	I1216 02:30:23.414791 1798376 cni.go:84] Creating CNI manager for ""
	I1216 02:30:23.414859 1798376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:30:23.414873 1798376 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 02:30:23.414967 1798376 start.go:353] cluster config:
	{Name:download-only-734582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-734582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:30:23.418107 1798376 out.go:99] Starting "download-only-734582" primary control-plane node in "download-only-734582" cluster
	I1216 02:30:23.418133 1798376 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:30:23.421246 1798376 out.go:99] Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:30:23.421309 1798376 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1216 02:30:23.421391 1798376 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:30:23.437586 1798376 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 02:30:23.437823 1798376 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory
	I1216 02:30:23.437919 1798376 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 02:30:23.476418 1798376 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:30:23.476451 1798376 cache.go:65] Caching tarball of preloaded images
	I1216 02:30:23.476665 1798376 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1216 02:30:23.480164 1798376 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1216 02:30:23.480200 1798376 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1216 02:30:23.575624 1798376 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1216 02:30:23.575763 1798376 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:30:28.073164 1798376 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb as a tarball
	I1216 02:30:42.893768 1798376 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1216 02:30:42.894205 1798376 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/download-only-734582/config.json ...
	I1216 02:30:42.894243 1798376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/download-only-734582/config.json: {Name:mk38ac3da51959688ad6876e65ba81b5ae521691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:30:42.894437 1798376 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1216 02:30:42.894632 1798376 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-734582 host does not exist
	  To start a cluster, run: "minikube start -p download-only-734582"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
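
Note that this test passes even though the logs command exits non-zero: a download-only profile never creates a host, so exit status 85 is the expected outcome and the assertion is on that code. A minimal Go sketch of asserting a specific exit code from a subprocess (a standalone illustration, not the report's actual test harness):

    // Sketch: run a command and check that it fails with exit status 85.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-734582")
        err := cmd.Run()

        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Println("got expected exit status 85")
            return
        }
        fmt.Printf("unexpected result: %v\n", err)
    }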

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-734582
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (8.14s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-468692 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-468692 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.142293758s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (8.14s)
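
The json-events tests drive "start -o=json", which turns minikube's progress output into a stream of JSON objects on stdout. A minimal sketch of consuming such a stream with a streaming decoder (the loose map shape is deliberate; this does not assume minikube's exact event schema):

    // Sketch: decode a stream of JSON objects from a subprocess's stdout.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start",
            "-o=json", "--download-only", "-p", "demo")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        dec := json.NewDecoder(stdout)
        for {
            var ev map[string]any // keep the shape loose rather than guessing the schema
            if err := dec.Decode(&ev); err != nil {
                break // EOF ends the stream
            }
            fmt.Println("event type:", ev["type"])
        }
        _ = cmd.Wait()
    }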

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 02:30:52.355132 1798370 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1216 02:30:52.355174 1798370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-468692
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-468692: exit status 85 (84.810982ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-734582 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-734582 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │ 16 Dec 25 02:30 UTC │
	│ delete  │ -p download-only-734582                                                                                                                                                               │ download-only-734582 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │ 16 Dec 25 02:30 UTC │
	│ start   │ -o=json --download-only -p download-only-468692 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-468692 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:30:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:30:44.257299 1798576 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:30:44.257448 1798576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:44.257460 1798576 out.go:374] Setting ErrFile to fd 2...
	I1216 02:30:44.257478 1798576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:44.257756 1798576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:30:44.258234 1798576 out.go:368] Setting JSON to true
	I1216 02:30:44.259159 1798576 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":29589,"bootTime":1765822656,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:30:44.259230 1798576 start.go:143] virtualization:  
	I1216 02:30:44.262660 1798576 out.go:99] [download-only-468692] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:30:44.262904 1798576 notify.go:221] Checking for updates...
	I1216 02:30:44.265844 1798576 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:30:44.268798 1798576 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:30:44.271710 1798576 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:30:44.274660 1798576 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:30:44.277555 1798576 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 02:30:44.283288 1798576 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:30:44.283568 1798576 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:30:44.317820 1798576 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:30:44.317934 1798576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:30:44.369413 1798576 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 02:30:44.359227258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:30:44.369525 1798576 docker.go:319] overlay module found
	I1216 02:30:44.372501 1798576 out.go:99] Using the docker driver based on user configuration
	I1216 02:30:44.372544 1798576 start.go:309] selected driver: docker
	I1216 02:30:44.372552 1798576 start.go:927] validating driver "docker" against <nil>
	I1216 02:30:44.372655 1798576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:30:44.427053 1798576 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 02:30:44.417687687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:30:44.427238 1798576 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:30:44.427518 1798576 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 02:30:44.427673 1798576 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:30:44.430826 1798576 out.go:171] Using Docker driver with root privileges
	I1216 02:30:44.433704 1798576 cni.go:84] Creating CNI manager for ""
	I1216 02:30:44.433773 1798576 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:30:44.433782 1798576 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 02:30:44.433864 1798576 start.go:353] cluster config:
	{Name:download-only-468692 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-468692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:30:44.436769 1798576 out.go:99] Starting "download-only-468692" primary control-plane node in "download-only-468692" cluster
	I1216 02:30:44.436787 1798576 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:30:44.439737 1798576 out.go:99] Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:30:44.439773 1798576 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 02:30:44.439957 1798576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:30:44.456631 1798576 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 02:30:44.456793 1798576 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory
	I1216 02:30:44.456817 1798576 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory, skipping pull
	I1216 02:30:44.456823 1798576 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in cache, skipping pull
	I1216 02:30:44.456830 1798576 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb as a tarball
	I1216 02:30:44.493331 1798576 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1216 02:30:44.493358 1798576 cache.go:65] Caching tarball of preloaded images
	I1216 02:30:44.493537 1798576 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1216 02:30:44.496743 1798576 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1216 02:30:44.496776 1798576 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1216 02:30:44.589165 1798576 preload.go:295] Got checksum from GCS API "cd1a05d5493c9270e248bf47fb3f071d"
	I1216 02:30:44.589220 1798576 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:cd1a05d5493c9270e248bf47fb3f071d -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-468692 host does not exist
	  To start a cluster, run: "minikube start -p download-only-468692"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)
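
Unlike the first v1.28.0 run, the Last Start log above takes the cache-hit path for the kic base image ("Found ... in local cache directory, skipping pull"). The pattern is a plain stat-before-fetch guard; a minimal sketch with hypothetical names:

    // Sketch: skip an expensive fetch when the artifact is already cached.
    package cache

    import "os"

    // ensureBaseImage pulls only when cachePath does not exist yet.
    func ensureBaseImage(cachePath string, pull func(string) error) error {
        if _, err := os.Stat(cachePath); err == nil {
            return nil // found in local cache directory, skipping pull
        }
        return pull(cachePath)
    }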

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-468692
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (9.79s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-791072 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-791072 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.789777613s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (9.79s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 02:31:02.584939 1798370 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1216 02:31:02.584995 1798370 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
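
As the paths in these checks show, every Kubernetes version gets its own preload tarball, and the filename encodes the preload schema version, Kubernetes version, container runtime, storage driver, and CPU architecture. A sketch of composing that name, with the pattern inferred from the filenames in this report rather than taken from minikube's source:

    // Sketch: compose a preload tarball filename as observed in this report.
    package main

    import "fmt"

    func preloadName(schema, k8s, runtime, driver, arch string) string {
        return fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-%s-%s.tar.lz4",
            schema, k8s, runtime, driver, arch)
    }

    func main() {
        // Prints: preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
        fmt.Println(preloadName("v18", "v1.35.0-beta.0", "containerd", "overlay2", "arm64"))
    }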

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-791072
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-791072: exit status 85 (157.90017ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-734582 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-734582 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │ 16 Dec 25 02:30 UTC │
	│ delete  │ -p download-only-734582                                                                                                                                                                      │ download-only-734582 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │ 16 Dec 25 02:30 UTC │
	│ start   │ -o=json --download-only -p download-only-468692 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-468692 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │ 16 Dec 25 02:30 UTC │
	│ delete  │ -p download-only-468692                                                                                                                                                                      │ download-only-468692 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │ 16 Dec 25 02:30 UTC │
	│ start   │ -o=json --download-only -p download-only-791072 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-791072 │ jenkins │ v1.37.0 │ 16 Dec 25 02:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:30:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:30:52.841501 1798775 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:30:52.841697 1798775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:52.841728 1798775 out.go:374] Setting ErrFile to fd 2...
	I1216 02:30:52.841748 1798775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:30:52.842044 1798775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:30:52.842566 1798775 out.go:368] Setting JSON to true
	I1216 02:30:52.843508 1798775 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":29597,"bootTime":1765822656,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:30:52.843612 1798775 start.go:143] virtualization:  
	I1216 02:30:52.847090 1798775 out.go:99] [download-only-791072] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:30:52.847472 1798775 notify.go:221] Checking for updates...
	I1216 02:30:52.851136 1798775 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:30:52.854736 1798775 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:30:52.857844 1798775 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:30:52.861092 1798775 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:30:52.864220 1798775 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 02:30:52.870170 1798775 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:30:52.870454 1798775 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:30:52.902189 1798775 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:30:52.902302 1798775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:30:52.962662 1798775 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 02:30:52.953072247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:30:52.962764 1798775 docker.go:319] overlay module found
	I1216 02:30:52.965734 1798775 out.go:99] Using the docker driver based on user configuration
	I1216 02:30:52.965788 1798775 start.go:309] selected driver: docker
	I1216 02:30:52.965796 1798775 start.go:927] validating driver "docker" against <nil>
	I1216 02:30:52.965912 1798775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:30:53.027375 1798775 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 02:30:53.017530484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:30:53.027532 1798775 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:30:53.027815 1798775 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 02:30:53.027994 1798775 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:30:53.031142 1798775 out.go:171] Using Docker driver with root privileges
	I1216 02:30:53.034020 1798775 cni.go:84] Creating CNI manager for ""
	I1216 02:30:53.034085 1798775 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1216 02:30:53.034101 1798775 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 02:30:53.034187 1798775 start.go:353] cluster config:
	{Name:download-only-791072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-791072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:30:53.037060 1798775 out.go:99] Starting "download-only-791072" primary control-plane node in "download-only-791072" cluster
	I1216 02:30:53.037091 1798775 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1216 02:30:53.040127 1798775 out.go:99] Pulling base image v0.0.48-1765575274-22117 ...
	I1216 02:30:53.040184 1798775 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:30:53.040377 1798775 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 02:30:53.057047 1798775 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 02:30:53.057212 1798775 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory
	I1216 02:30:53.057239 1798775 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory, skipping pull
	I1216 02:30:53.057250 1798775 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in cache, skipping pull
	I1216 02:30:53.057260 1798775 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb as a tarball
	I1216 02:30:53.096994 1798775 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1216 02:30:53.097030 1798775 cache.go:65] Caching tarball of preloaded images
	I1216 02:30:53.097218 1798775 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1216 02:30:53.100251 1798775 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1216 02:30:53.100290 1798775 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1216 02:30:53.182145 1798775 preload.go:295] Got checksum from GCS API "4ead9b9dbba82a7ecb06a001f1ffeaf3"
	I1216 02:30:53.182218 1798775 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:4ead9b9dbba82a7ecb06a001f1ffeaf3 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-791072 host does not exist
	  To start a cluster, run: "minikube start -p download-only-791072"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.16s)
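
Note: exit status 85 is the expected result here, not a suite failure. The profile was created with --download-only, so no host exists for "minikube logs" to inspect (the stdout above says as much). A minimal sketch of reproducing the check by hand, assuming the profile is still present:

    out/minikube-linux-arm64 -p download-only-791072 logs
    echo $?    # expected to print 85: the control-plane host was never created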

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.25s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-791072
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.25s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1216 02:31:04.469105 1798370 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-635078 --alsologtostderr --binary-mirror http://127.0.0.1:38209 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-635078" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-635078
--- PASS: TestBinaryMirror (0.62s)
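
TestBinaryMirror verifies that the kubectl/kubelet/kubeadm downloads can be redirected away from dl.k8s.io (the binary.go line above shows the URL being bypassed). A rough sketch of the same idea, assuming a local mirror that reproduces the dl.k8s.io release path layout; the directory and profile names below are hypothetical:

    # serve ./mirror, which must contain release/<version>/bin/linux/arm64/<binary>
    python3 -m http.server 38209 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:38209 --driver=docker --container-runtime=containerd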

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-870019
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-870019: exit status 85 (84.783889ms)
-- stdout --
	* Profile "addons-870019" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-870019"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-870019
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-870019: exit status 85 (83.506208ms)
-- stdout --
	* Profile "addons-870019" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-870019"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (165.79s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-870019 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-870019 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m45.792186584s)
--- PASS: TestAddons/Setup (165.79s)
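
The Setup start enables fifteen addons in one invocation; the tests that follow each exercise and then disable one of them. A quick way to confirm the enabled set, assuming the same profile as above:

    out/minikube-linux-arm64 -p addons-870019 addons list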

TestAddons/serial/Volcano (39.75s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 56.422713ms
addons_test.go:878: volcano-admission stabilized in 57.321761ms
addons_test.go:870: volcano-scheduler stabilized in 57.449914ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-c2sqt" [b3437f1e-3144-4ac0-939c-74a47cbb400b] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003464913s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-lr98d" [6ed53e4f-73b0-4be4-b29b-cc32ffa579e6] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003749113s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-fsvmw" [5fca8886-3454-46ec-ac13-abf4f9ac4851] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003605167s
addons_test.go:905: (dbg) Run:  kubectl --context addons-870019 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-870019 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-870019 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [d5fe9c79-c66d-4fd7-ad91-144418ab9503] Pending
helpers_test.go:353: "test-job-nginx-0" [d5fe9c79-c66d-4fd7-ad91-144418ab9503] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [d5fe9c79-c66d-4fd7-ad91-144418ab9503] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004307142s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable volcano --alsologtostderr -v=1: (12.027253125s)
--- PASS: TestAddons/serial/Volcano (39.75s)
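
The testdata/vcjob.yaml manifest itself is not reproduced in this report. For orientation only, a Volcano job that would yield the pod name seen above (test-job-nginx-0 follows Volcano's <job>-<task>-<index> convention) looks roughly like this; treat it as an illustrative reconstruction, not the actual testdata file:

    kubectl --context addons-870019 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano   # hand the pods to the volcano scheduler
      minAvailable: 1
      tasks:
      - name: nginx
        replicas: 1
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: nginx
              image: nginx
    EOF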

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-870019 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-870019 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (10.83s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-870019 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-870019 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1283d5ed-27a0-44e4-b080-718bd0d03679] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1283d5ed-27a0-44e4-b080-718bd0d03679] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003627069s
addons_test.go:696: (dbg) Run:  kubectl --context addons-870019 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-870019 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-870019 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-870019 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.83s)

TestAddons/parallel/Registry (15.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.345646ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-kznpt" [04a8c3a3-ab6a-4c94-98a0-60518242b5ad] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003698384s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-hb2hz" [5551aff3-51bd-40dc-a5f0-01675ee72895] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003728201s
addons_test.go:394: (dbg) Run:  kubectl --context addons-870019 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-870019 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-870019 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.50603236s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 ip
2025/12/16 02:35:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.55s)
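
The DEBUG GET above shows where the addon's registry-proxy publishes the registry: the node IP on port 5000. A sketch of probing it from the host with the standard registry v2 API, assuming the same profile:

    curl -sS http://$(out/minikube-linux-arm64 -p addons-870019 ip):5000/v2/_catalog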

TestAddons/parallel/RegistryCreds (0.73s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.150458ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-870019
addons_test.go:334: (dbg) Run:  kubectl --context addons-870019 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

TestAddons/parallel/Ingress (18.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-870019 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-870019 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-870019 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6f8f33cb-4f6d-4b4d-8caa-fac1fd443aa0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6f8f33cb-4f6d-4b4d-8caa-fac1fd443aa0] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.002998632s
I1216 02:36:25.042809 1798370 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-870019 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable ingress-dns --alsologtostderr -v=1: (1.318540797s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable ingress --alsologtostderr -v=1: (8.008616518s)
--- PASS: TestAddons/parallel/Ingress (18.02s)
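
Both probes above rely on name-based routing rather than real DNS: the curl passes Host: nginx.example.com to the ingress controller, and ingress-dns answers the nslookup for hello-john.test at the node IP. The same ingress check can be run from the host, assuming the node IP is reachable as it is in this test:

    curl -s -H 'Host: nginx.example.com' http://$(out/minikube-linux-arm64 -p addons-870019 ip)/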

TestAddons/parallel/InspektorGadget (10.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-d8kcr" [1ff44a79-15fe-43c5-8bb8-42449eb14065] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004035377s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable inspektor-gadget --alsologtostderr -v=1: (5.799580585s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

TestAddons/parallel/MetricsServer (6.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 23.091673ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-7z6qb" [ea36637c-902c-47ef-aab3-a78debcfbb02] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00330219s
addons_test.go:465: (dbg) Run:  kubectl --context addons-870019 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.93s)

TestAddons/parallel/CSI (63.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1216 02:35:32.899865 1798370 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 02:35:32.903967 1798370 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 02:35:32.904003 1798370 kapi.go:107] duration metric: took 7.669227ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.680368ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-870019 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-870019 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [e73a7469-56d1-4e83-8123-7bdd007984ad] Pending
helpers_test.go:353: "task-pv-pod" [e73a7469-56d1-4e83-8123-7bdd007984ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [e73a7469-56d1-4e83-8123-7bdd007984ad] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003667941s
addons_test.go:574: (dbg) Run:  kubectl --context addons-870019 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-870019 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-870019 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-870019 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-870019 delete pod task-pv-pod: (1.409287245s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-870019 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-870019 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-870019 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [70fa6f88-9859-439e-84a6-1f7c2f9b7258] Pending
helpers_test.go:353: "task-pv-pod-restore" [70fa6f88-9859-439e-84a6-1f7c2f9b7258] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [70fa6f88-9859-439e-84a6-1f7c2f9b7258] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004410434s
addons_test.go:616: (dbg) Run:  kubectl --context addons-870019 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-870019 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-870019 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.920187798s)
--- PASS: TestAddons/parallel/CSI (63.77s)
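
The polling above is a full create, snapshot, delete, restore round trip: PVC hpvc, pod task-pv-pod, VolumeSnapshot new-snapshot-demo, then PVC hpvc-restore cloned from the snapshot and mounted by task-pv-pod-restore. A condensed sketch of the snapshot-and-restore half; the class names are the ones the csi-hostpath-driver addon conventionally installs and are assumptions here, since the testdata files are not shown:

    kubectl --context addons-870019 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc                 # assumed class name
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
      dataSource:                                       # restore from the snapshot above
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
    EOF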

TestAddons/parallel/Headlamp (17.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-870019 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-870019 --alsologtostderr -v=1: (1.623745518s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-cz7sc" [035e6aa0-39a2-49b5-b4a7-6402fe204e87] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-cz7sc" [035e6aa0-39a2-49b5-b4a7-6402fe204e87] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003273621s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable headlamp --alsologtostderr -v=1: (5.800102041s)
--- PASS: TestAddons/parallel/Headlamp (17.43s)

TestAddons/parallel/CloudSpanner (6.78s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-rgmf5" [9126d7a2-de2a-46cf-b561-46bd32f67ff5] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003788018s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.78s)

TestAddons/parallel/LocalPath (52.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-870019 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-870019 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-870019 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [bb15ba63-8e39-4157-8a74-0df8f97e583f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [bb15ba63-8e39-4157-8a74-0df8f97e583f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [bb15ba63-8e39-4157-8a74-0df8f97e583f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003985293s
addons_test.go:969: (dbg) Run:  kubectl --context addons-870019 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 ssh "cat /opt/local-path-provisioner/pvc-3706f5f4-17ab-4adc-8a49-d0260098f24c_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-870019 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-870019 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.276422944s)
--- PASS: TestAddons/parallel/LocalPath (52.93s)
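
The repeated phase polls are expected with this provisioner: local-path volumes bind on first consumer, so the claim stays Pending until the pod is scheduled, after which the data lands under /opt/local-path-provisioner on the node (the ssh cat above reads it back). A minimal claim against it, assuming the provisioner's usual storage class name:

    kubectl --context addons-870019 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # assumed: the class storage-provisioner-rancher registers
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 128Mi
    EOF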

TestAddons/parallel/NvidiaDevicePlugin (5.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-6x2nt" [5897f671-db41-45c3-8042-cf657ff1a25b] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005001024s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-v4xjh" [8bdc22a8-04b4-433f-a8db-46de6acabf2e] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00339114s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-870019 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-870019 addons disable yakd --alsologtostderr -v=1: (5.851944s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-870019
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-870019: (12.077988176s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-870019
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-870019
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-870019
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (43.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-059203 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-059203 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (40.128380025s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-059203 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-059203 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-059203 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-059203" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-059203
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-059203: (2.572415713s)
--- PASS: TestCertOptions (43.64s)
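
The openssl step dumps the whole certificate; what the test actually cares about is that the extra --apiserver-ips/--apiserver-names made it into the SANs and that port 8555 is served. A narrower form of the same check, reusing the exact path from the run above:

    out/minikube-linux-arm64 -p cert-options-059203 ssh \
      "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'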

TestCertExpiration (222.02s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-774320 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-774320 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (31.805554262s)
E1216 03:50:51.132000 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:52:14.214581 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-774320 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-774320 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.419769302s)
helpers_test.go:176: Cleaning up "cert-expiration-774320" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-774320
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-774320: (2.788386011s)
--- PASS: TestCertExpiration (222.02s)
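
The sequence above starts a cluster whose certificates expire after three minutes, waits out the budget (hence the gap between the two starts), then restarts with --cert-expiration=8760h, which is expected to rotate the expired certificates rather than fail on them. The remaining lifetime can be inspected the same way TestCertOptions reads the cert, assuming the same path applies to this profile:

    out/minikube-linux-arm64 -p cert-expiration-774320 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"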

TestForceSystemdFlag (32.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-670301 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1216 03:48:44.847242 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:48:51.133399 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-670301 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.78121932s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-670301 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-670301" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-670301
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-670301: (2.066735603s)
--- PASS: TestForceSystemdFlag (32.16s)
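
The ssh cat is the assertion: with --force-systemd the generated containerd config should select the systemd cgroup driver. A narrower check, assuming containerd's standard runc option key:

    out/minikube-linux-arm64 -p force-systemd-flag-670301 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"
    # expected, under the runc runtime options: SystemdCgroup = true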

TestForceSystemdEnv (37.37s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-986662 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-986662 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.978750476s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-986662 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-986662" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-986662
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-986662: (2.090061667s)
--- PASS: TestForceSystemdEnv (37.37s)

TestDockerEnvContainerd (46.38s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-474886 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-474886 --driver=docker  --container-runtime=containerd: (31.048394803s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-474886"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-474886": (1.105867838s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hn98OY21BGHW/agent.1818137" SSH_AGENT_PID="1818138" DOCKER_HOST=ssh://docker@127.0.0.1:34339 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hn98OY21BGHW/agent.1818137" SSH_AGENT_PID="1818138" DOCKER_HOST=ssh://docker@127.0.0.1:34339 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hn98OY21BGHW/agent.1818137" SSH_AGENT_PID="1818138" DOCKER_HOST=ssh://docker@127.0.0.1:34339 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.286575968s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hn98OY21BGHW/agent.1818137" SSH_AGENT_PID="1818138" DOCKER_HOST=ssh://docker@127.0.0.1:34339 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-474886" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-474886
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-474886: (2.040972615s)
--- PASS: TestDockerEnvContainerd (46.38s)
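
The variables visible in the commands above (SSH_AUTH_SOCK, SSH_AGENT_PID, DOCKER_HOST=ssh://docker@127.0.0.1:34339) are exactly what docker-env --ssh-host --ssh-add prints, so a docker CLI on the host talks to the daemon inside the minikube node over SSH. The usual interactive form is:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-474886)"
    docker version   # now answered by the daemon inside the node
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env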

TestErrorSpam/setup (30.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-393004 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-393004 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-393004 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-393004 --driver=docker  --container-runtime=containerd: (30.745142637s)
--- PASS: TestErrorSpam/setup (30.75s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.9s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 pause
--- PASS: TestErrorSpam/pause (1.90s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (1.63s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 stop: (1.417832946s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-393004 --log_dir /tmp/nospam-393004 stop
--- PASS: TestErrorSpam/stop (1.63s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (75.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-853651 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1216 02:38:51.141855 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:51.148392 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:51.159790 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:51.181284 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:51.222731 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:51.303976 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:51.465453 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:51.786797 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:52.428289 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:53.709676 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:56.271649 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:39:01.393549 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:39:11.635008 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:39:32.116716 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-853651 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m15.96733385s)
--- PASS: TestFunctional/serial/StartWithProxy (75.97s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.32s)

=== RUN   TestFunctional/serial/SoftStart
I1216 02:39:39.897538 1798370 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-853651 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-853651 --alsologtostderr -v=8: (7.311352812s)
functional_test.go:678: soft start took 7.315027785s for "functional-853651" cluster.
I1216 02:39:47.209312 1798370 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.32s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-853651 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 cache add registry.k8s.io/pause:3.1: (1.32679386s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 cache add registry.k8s.io/pause:3.3: (1.203483755s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 cache add registry.k8s.io/pause:latest: (1.139701942s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.67s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-853651 /tmp/TestFunctionalserialCacheCmdcacheadd_local3243284189/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cache add minikube-local-cache-test:functional-853651
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cache delete minikube-local-cache-test:functional-853651
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-853651
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.567204ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
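The cache_reload sequence above boils down to: delete a cached image inside the node, confirm it is gone, then let minikube push it back from the host-side cache. A minimal sketch using the same image and profile:

  $ out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image absent
  $ out/minikube-linux-arm64 -p functional-853651 cache reload
  $ out/minikube-linux-arm64 -p functional-853651 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again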

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 kubectl -- --context functional-853651 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-853651 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (47.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-853651 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 02:40:13.079223 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-853651 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.52858177s)
functional_test.go:776: restart took 47.528699035s for "functional-853651" cluster.
I1216 02:40:42.627626 1798370 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (47.53s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-853651 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 logs: (1.468137094s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 logs --file /tmp/TestFunctionalserialLogsFileCmd1974199736/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 logs --file /tmp/TestFunctionalserialLogsFileCmd1974199736/001/logs.txt: (1.682495844s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.68s)

TestFunctional/serial/InvalidService (4.15s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-853651 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-853651
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-853651: exit status 115 (375.776836ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31672 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-853651 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)
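The scenario above checks that a Service with no running backing pod (testdata/invalidsvc.yaml) makes `minikube service` fail fast with SVC_UNREACHABLE rather than hang. A minimal sketch of the same round trip:

  $ kubectl --context functional-853651 apply -f testdata/invalidsvc.yaml
  $ out/minikube-linux-arm64 service invalid-svc -p functional-853651    # exit 115: no running pod for service invalid-svc
  $ kubectl --context functional-853651 delete -f testdata/invalidsvc.yaml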

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 config get cpus: exit status 14 (63.148653ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 config get cpus: exit status 14 (93.598823ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
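`config get` on an unset key is what exits with status 14 above. The full set/unset round trip, as exercised by the test:

  $ out/minikube-linux-arm64 -p functional-853651 config get cpus      # exit 14: key not set
  $ out/minikube-linux-arm64 -p functional-853651 config set cpus 2
  $ out/minikube-linux-arm64 -p functional-853651 config get cpus      # prints 2
  $ out/minikube-linux-arm64 -p functional-853651 config unset cpus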

TestFunctional/parallel/DashboardCmd (7.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-853651 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-853651 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1835201: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.36s)

TestFunctional/parallel/DryRun (0.54s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (240.055455ms)

-- stdout --
	* [functional-853651] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1216 02:41:25.843230 1834614 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:41:25.843346 1834614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:41:25.843387 1834614 out.go:374] Setting ErrFile to fd 2...
	I1216 02:41:25.843393 1834614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:41:25.843659 1834614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:41:25.844010 1834614 out.go:368] Setting JSON to false
	I1216 02:41:25.844961 1834614 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30230,"bootTime":1765822656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:41:25.845035 1834614 start.go:143] virtualization:  
	I1216 02:41:25.849215 1834614 out.go:179] * [functional-853651] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 02:41:25.852432 1834614 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:41:25.852484 1834614 notify.go:221] Checking for updates...
	I1216 02:41:25.857122 1834614 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:41:25.863467 1834614 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:41:25.866396 1834614 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:41:25.869222 1834614 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:41:25.872150 1834614 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:41:25.875514 1834614 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 02:41:25.876080 1834614 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:41:25.908182 1834614 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:41:25.908302 1834614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:41:25.983855 1834614 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 02:41:25.969968874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:41:25.983956 1834614 docker.go:319] overlay module found
	I1216 02:41:25.987629 1834614 out.go:179] * Using the docker driver based on existing profile
	I1216 02:41:25.990549 1834614 start.go:309] selected driver: docker
	I1216 02:41:25.990573 1834614 start.go:927] validating driver "docker" against &{Name:functional-853651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-853651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:41:25.990667 1834614 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:41:25.994302 1834614 out.go:203] 
	W1216 02:41:25.997168 1834614 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 02:41:26.001945 1834614 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-853651 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.54s)
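The dry-run path validates the request before touching any container: 250MB is below minikube's 1800MB usable minimum, so the first command fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the second, without a memory override, succeeds. Sketch:

  $ out/minikube-linux-arm64 start -p functional-853651 --dry-run --memory 250MB --driver=docker --container-runtime=containerd    # exit 23
  $ out/minikube-linux-arm64 start -p functional-853651 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd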

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (215.968204ms)

-- stdout --
	* [functional-853651] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1216 02:41:27.395444 1835029 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:41:27.395635 1835029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:41:27.395666 1835029 out.go:374] Setting ErrFile to fd 2...
	I1216 02:41:27.395689 1835029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:41:27.396713 1835029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 02:41:27.397162 1835029 out.go:368] Setting JSON to false
	I1216 02:41:27.398105 1835029 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30232,"bootTime":1765822656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 02:41:27.398212 1835029 start.go:143] virtualization:  
	I1216 02:41:27.401713 1835029 out.go:179] * [functional-853651] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1216 02:41:27.404740 1835029 notify.go:221] Checking for updates...
	I1216 02:41:27.408423 1835029 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:41:27.411522 1835029 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:41:27.414485 1835029 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 02:41:27.417332 1835029 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 02:41:27.420245 1835029 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 02:41:27.423248 1835029 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:41:27.426778 1835029 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 02:41:27.427481 1835029 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:41:27.471403 1835029 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 02:41:27.471559 1835029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 02:41:27.533656 1835029 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 02:41:27.524171019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 02:41:27.533762 1835029 docker.go:319] overlay module found
	I1216 02:41:27.536852 1835029 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 02:41:27.539770 1835029 start.go:309] selected driver: docker
	I1216 02:41:27.539795 1835029 start.go:927] validating driver "docker" against &{Name:functional-853651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-853651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:41:27.539899 1835029 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:41:27.543380 1835029 out.go:203] 
	W1216 02:41:27.546267 1835029 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 02:41:27.549172 1835029 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (8.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-853651 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-853651 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-tcrrh" [9c48ae3f-06a3-4548-ae6f-c106dc39f972] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-tcrrh" [9c48ae3f-06a3-4548-ae6f-c106dc39f972] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.019998988s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32618
functional_test.go:1680: http://192.168.49.2:32618: success! body:
Request served by hello-node-connect-7d85dfc575-tcrrh

HTTP/1.1 GET /

Host: 192.168.49.2:32618
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.61s)
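The connect test is the standard NodePort round trip: deploy, expose, resolve the node URL through minikube, then fetch it. A minimal sketch (the final curl is an illustrative addition; the test itself fetches the URL from Go):

  $ kubectl --context functional-853651 create deployment hello-node-connect --image kicbase/echo-server
  $ kubectl --context functional-853651 expose deployment hello-node-connect --type=NodePort --port=8080
  $ out/minikube-linux-arm64 -p functional-853651 service hello-node-connect --url    # e.g. http://192.168.49.2:32618
  $ curl "$(out/minikube-linux-arm64 -p functional-853651 service hello-node-connect --url)"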

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (22.29s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [111532e8-7e4b-4c8c-8f62-442b377137b8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003515739s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-853651 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-853651 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-853651 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-853651 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8f800b8f-e4c3-423f-9b33-c28f4c362d5c] Pending
helpers_test.go:353: "sp-pod" [8f800b8f-e4c3-423f-9b33-c28f4c362d5c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [8f800b8f-e4c3-423f-9b33-c28f4c362d5c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002793517s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-853651 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-853651 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-853651 delete -f testdata/storage-provisioner/pod.yaml: (1.106337538s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-853651 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ffc531a9-2cbe-43a9-8fb8-3f37f8f5ddcc] Pending
helpers_test.go:353: "sp-pod" [ffc531a9-2cbe-43a9-8fb8-3f37f8f5ddcc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004253194s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-853651 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.29s)
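The PVC test shows that data written through the claim outlives any single pod: the same claim-backed volume is mounted by two successive sp-pod instances. The flow, condensed:

  $ kubectl --context functional-853651 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-853651 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-853651 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-853651 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-853651 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-853651 exec sp-pod -- ls /tmp/mount     # foo is still there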

TestFunctional/parallel/SSHCmd (0.89s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.89s)

TestFunctional/parallel/CpCmd (2.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh -n functional-853651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cp functional-853651:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1191954672/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh -n functional-853651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh -n functional-853651 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.45s)

TestFunctional/parallel/FileSync (0.48s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1798370/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /etc/test/nested/copy/1798370/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)
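FileSync relies on minikube copying anything under $MINIKUBE_HOME/.minikube/files/ into the node at the same relative path on start. A sketch mirroring the logged paths (the mkdir/echo lines are illustrative; the sync itself happens on the next `minikube start`):

  $ mkdir -p "$MINIKUBE_HOME/.minikube/files/etc/test/nested/copy/1798370"
  $ echo "Test file for checking file sync process" > "$MINIKUBE_HOME/.minikube/files/etc/test/nested/copy/1798370/hosts"
  $ out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /etc/test/nested/copy/1798370/hosts"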

TestFunctional/parallel/CertSync (1.74s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1798370.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /etc/ssl/certs/1798370.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1798370.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /usr/share/ca-certificates/1798370.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/17983702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /etc/ssl/certs/17983702.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/17983702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /usr/share/ca-certificates/17983702.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)
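
A note on the hashed filenames above: /etc/ssl/certs/51391683.0 and 3ec20f2e.0 follow the OpenSSL subject-hash naming convention (<hash>.0), so the expected install path can be derived from the certificate itself. A sketch, assuming openssl is on PATH; the PEM filename is illustrative:

package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func main() {
    // Print the subject hash OpenSSL uses to name CA certs in /etc/ssl/certs.
    out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", "1798370.pem").Output()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("expected install path: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}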

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-853651 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
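
The go-template in that kubectl invocation indexes the first node and ranges over its label map, emitting only the keys. A self-contained sketch of the same template body run against literal data (the labels below are made up for illustration):

package main

import (
    "os"
    "text/template"
)

func main() {
    tmpl := template.Must(template.New("labels").Parse(
        "{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"))
    data := map[string]any{
        "items": []map[string]any{
            {"metadata": map[string]any{"labels": map[string]any{
                "kubernetes.io/arch": "arm64", // illustrative values
                "kubernetes.io/os":   "linux",
            }}},
        },
    }
    if err := tmpl.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}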

TestFunctional/parallel/NonActiveRuntimeDisabled (0.91s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh "sudo systemctl is-active docker": exit status 1 (479.330024ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh "sudo systemctl is-active crio": exit status 1 (430.196161ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.91s)
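
The non-zero exits above are the point of the test: systemctl is-active prints the unit state and exits non-zero (3 here, carried through ssh) when the unit is inactive, and the test fails only if docker or crio unexpectedly reports active on a containerd cluster. A sketch of reading that exit code in Go:

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // "docker" is illustrative; is-active exits 0 only for an active unit.
    out, err := exec.Command("systemctl", "is-active", "docker").Output()
    code := 0
    var ee *exec.ExitError
    if errors.As(err, &ee) {
        code = ee.ExitCode()
    } else if err != nil {
        panic(err) // systemctl itself missing, etc.
    }
    fmt.Printf("state=%s exit=%d\n", strings.TrimSpace(string(out)), code)
}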

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-853651 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-853651 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-853651 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1829962: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-853651 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 version -o=json --components
2025/12/16 02:41:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 version -o=json --components: (1.39101793s)
--- PASS: TestFunctional/parallel/Version/components (1.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-853651 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-853651 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a4399859-56fc-4034-8e2f-198ab83d0c74] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a4399859-56fc-4034-8e2f-198ab83d0c74] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003726641s
I1216 02:40:59.538198 1798370 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)
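
The Pending-to-Running polling the helper does here can be approximated with kubectl's own readiness gate; a sketch using the same context and label selector (the timeout mirrors the 4m0s above, everything else is an assumption):

package main

import (
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("kubectl", "--context", "functional-853651",
        "wait", "--for=condition=Ready", "pod", "-l", "run=nginx-svc",
        "--timeout=240s")
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Fatalf("pod never became Ready: %v\n%s", err, out)
    }
}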

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-853651 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-853651
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-853651
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-853651 image ls --format short --alsologtostderr:
I1216 02:41:34.813227 1835906 out.go:360] Setting OutFile to fd 1 ...
I1216 02:41:34.813589 1835906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:34.813635 1835906 out.go:374] Setting ErrFile to fd 2...
I1216 02:41:34.813763 1835906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:34.814080 1835906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 02:41:34.814792 1835906 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:34.814948 1835906 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:34.815669 1835906 cli_runner.go:164] Run: docker container inspect functional-853651 --format={{.State.Status}}
I1216 02:41:34.836215 1835906 ssh_runner.go:195] Run: systemctl --version
I1216 02:41:34.836269 1835906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853651
I1216 02:41:34.854482 1835906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34349 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-853651/id_rsa Username:docker}
I1216 02:41:34.952572 1835906 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-853651 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-853651  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-853651  │ sha256:106be4 │ 992B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:10afed │ 23MB   │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-853651 image ls --format table --alsologtostderr:
I1216 02:41:35.556837 1836088 out.go:360] Setting OutFile to fd 1 ...
I1216 02:41:35.557236 1836088 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:35.557525 1836088 out.go:374] Setting ErrFile to fd 2...
I1216 02:41:35.557556 1836088 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:35.557875 1836088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 02:41:35.559552 1836088 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:35.559766 1836088 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:35.560420 1836088 cli_runner.go:164] Run: docker container inspect functional-853651 --format={{.State.Status}}
I1216 02:41:35.578453 1836088 ssh_runner.go:195] Run: systemctl --version
I1216 02:41:35.578507 1836088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853651
I1216 02:41:35.596597 1836088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34349 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-853651/id_rsa Username:docker}
I1216 02:41:35.702629 1836088 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-853651 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["regist
ry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha25
6:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-853651","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:106be42faa4dd4147343df978553d005570a8f7212a8499769ed94b75df65cdd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-853651"],"size":"992"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9
dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c466046447204
5276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-853651 image ls --format json --alsologtostderr:
I1216 02:41:35.274388 1836008 out.go:360] Setting OutFile to fd 1 ...
I1216 02:41:35.274496 1836008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:35.274501 1836008 out.go:374] Setting ErrFile to fd 2...
I1216 02:41:35.274506 1836008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:35.274807 1836008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 02:41:35.282449 1836008 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:35.283887 1836008 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:35.284737 1836008 cli_runner.go:164] Run: docker container inspect functional-853651 --format={{.State.Status}}
I1216 02:41:35.307337 1836008 ssh_runner.go:195] Run: systemctl --version
I1216 02:41:35.307389 1836008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853651
I1216 02:41:35.335988 1836008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34349 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-853651/id_rsa Username:docker}
I1216 02:41:35.442269 1836008 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
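
The JSON variant is the easiest to consume programmatically; the fields visible in the stdout above (id, repoDigests, repoTags, size) map onto a small struct. A sketch that shells out to the same command and decodes the array (profile name assumed):

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"`
}

func main() {
    out, err := exec.Command("minikube", "-p", "functional-853651",
        "image", "ls", "--format", "json").Output()
    if err != nil {
        log.Fatal(err)
    }
    var imgs []image
    if err := json.Unmarshal(out, &imgs); err != nil {
        log.Fatal(err)
    }
    for _, im := range imgs {
        fmt.Println(im.ID, im.RepoTags, im.Size)
    }
}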

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls --format yaml --alsologtostderr
E1216 02:41:35.002131 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-853651 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-853651
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:106be42faa4dd4147343df978553d005570a8f7212a8499769ed94b75df65cdd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-853651
size: "992"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-853651 image ls --format yaml --alsologtostderr:
I1216 02:41:34.966391 1835932 out.go:360] Setting OutFile to fd 1 ...
I1216 02:41:34.966504 1835932 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:34.966513 1835932 out.go:374] Setting ErrFile to fd 2...
I1216 02:41:34.966517 1835932 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:34.966863 1835932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 02:41:34.967779 1835932 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:34.967903 1835932 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:34.968442 1835932 cli_runner.go:164] Run: docker container inspect functional-853651 --format={{.State.Status}}
I1216 02:41:34.988994 1835932 ssh_runner.go:195] Run: systemctl --version
I1216 02:41:34.989055 1835932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853651
I1216 02:41:35.011474 1835932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34349 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-853651/id_rsa Username:docker}
I1216 02:41:35.129660 1835932 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh pgrep buildkitd: exit status 1 (370.579533ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr: (3.449879832s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr:
I1216 02:41:35.456932 1836065 out.go:360] Setting OutFile to fd 1 ...
I1216 02:41:35.458267 1836065 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:35.458317 1836065 out.go:374] Setting ErrFile to fd 2...
I1216 02:41:35.458343 1836065 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:35.458635 1836065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 02:41:35.459377 1836065 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:35.462682 1836065 config.go:182] Loaded profile config "functional-853651": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1216 02:41:35.463377 1836065 cli_runner.go:164] Run: docker container inspect functional-853651 --format={{.State.Status}}
I1216 02:41:35.490428 1836065 ssh_runner.go:195] Run: systemctl --version
I1216 02:41:35.490480 1836065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-853651
I1216 02:41:35.512772 1836065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34349 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-853651/id_rsa Username:docker}
I1216 02:41:35.615144 1836065 build_images.go:162] Building image from path: /tmp/build.2071185050.tar
I1216 02:41:35.615229 1836065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 02:41:35.627703 1836065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2071185050.tar
I1216 02:41:35.636973 1836065 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2071185050.tar: stat -c "%s %y" /var/lib/minikube/build/build.2071185050.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2071185050.tar': No such file or directory
I1216 02:41:35.637010 1836065 ssh_runner.go:362] scp /tmp/build.2071185050.tar --> /var/lib/minikube/build/build.2071185050.tar (3072 bytes)
I1216 02:41:35.661256 1836065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2071185050
I1216 02:41:35.670499 1836065 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2071185050 -xf /var/lib/minikube/build/build.2071185050.tar
I1216 02:41:35.679882 1836065 containerd.go:394] Building image: /var/lib/minikube/build/build.2071185050
I1216 02:41:35.679956 1836065 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2071185050 --local dockerfile=/var/lib/minikube/build/build.2071185050 --output type=image,name=localhost/my-image:functional-853651
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:5f7906426c201ca0c458a470e9b76727f028a5c8c273d6b29f4051637030c181
#8 exporting manifest sha256:5f7906426c201ca0c458a470e9b76727f028a5c8c273d6b29f4051637030c181 0.0s done
#8 exporting config sha256:c81061fcadffe142f770aa8f93f49d6c9e6f0172d95a5cb6692b47b67dc28d38 0.0s done
#8 naming to localhost/my-image:functional-853651 done
#8 DONE 0.2s
I1216 02:41:38.823636 1836065 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2071185050 --local dockerfile=/var/lib/minikube/build/build.2071185050 --output type=image,name=localhost/my-image:functional-853651: (3.143654643s)
I1216 02:41:38.823715 1836065 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2071185050
I1216 02:41:38.832442 1836065 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2071185050.tar
I1216 02:41:38.840847 1836065 build_images.go:218] Built localhost/my-image:functional-853651 from /tmp/build.2071185050.tar
I1216 02:41:38.840880 1836065 build_images.go:134] succeeded building to: functional-853651
I1216 02:41:38.840885 1836065 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
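
As the stderr shows, image build ships the context as a tar to /var/lib/minikube/build, unpacks it, and drives BuildKit's buildctl directly with the dockerfile.v0 frontend. A sketch of that final invocation as it would run on the node (directory and tag copied from this log, but treat them as illustrative):

package main

import (
    "log"
    "os/exec"
)

func main() {
    dir := "/var/lib/minikube/build/build.2071185050" // staged build context
    cmd := exec.Command("sudo", "buildctl", "build",
        "--frontend", "dockerfile.v0",
        "--local", "context="+dir,
        "--local", "dockerfile="+dir,
        "--output", "type=image,name=localhost/my-image:functional-853651")
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Fatalf("build failed: %v\n%s", err, out)
    }
}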

TestFunctional/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-853651
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image load --daemon kicbase/echo-server:functional-853651 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 image load --daemon kicbase/echo-server:functional-853651 --alsologtostderr: (1.061362924s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image load --daemon kicbase/echo-server:functional-853651 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-853651
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image load --daemon kicbase/echo-server:functional-853651 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image save kicbase/echo-server:functional-853651 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image rm kicbase/echo-server:functional-853651 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)
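
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a tar round trip; the two commands involved, as a sketch (profile and tar path assumed):

package main

import (
    "log"
    "os/exec"
)

func main() {
    profile := "functional-853651" // assumption: a running profile
    tar := "/tmp/echo-server-save.tar"
    // Export an image from the cluster's runtime to a tarball...
    if out, err := exec.Command("minikube", "-p", profile, "image", "save",
        "kicbase/echo-server:"+profile, tar).CombinedOutput(); err != nil {
        log.Fatalf("save: %v\n%s", err, out)
    }
    // ...and load it back, e.g. after an `image rm`.
    if out, err := exec.Command("minikube", "-p", profile,
        "image", "load", tar).CombinedOutput(); err != nil {
        log.Fatalf("load: %v\n%s", err, out)
    }
}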

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-853651
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 image save --daemon kicbase/echo-server:functional-853651 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-853651
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-853651 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.132.70 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-853651 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.33s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdany-port2054387469/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765852861068977570" to /tmp/TestFunctionalparallelMountCmdany-port2054387469/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765852861068977570" to /tmp/TestFunctionalparallelMountCmdany-port2054387469/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765852861068977570" to /tmp/TestFunctionalparallelMountCmdany-port2054387469/001/test-1765852861068977570
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (480.638981ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 02:41:01.551355 1798370 retry.go:31] will retry after 429.655513ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 02:41 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 02:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 02:41 test-1765852861068977570
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh cat /mount-9p/test-1765852861068977570
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-853651 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [db040fa2-335d-4170-9331-33fd61e08da0] Pending
helpers_test.go:353: "busybox-mount" [db040fa2-335d-4170-9331-33fd61e08da0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [db040fa2-335d-4170-9331-33fd61e08da0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [db040fa2-335d-4170-9331-33fd61e08da0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003130419s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-853651 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdany-port2054387469/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.33s)
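
The first findmnt probe failing is expected: the 9p mount needs a moment to come up, so the helper retries with a growing delay (the retry.go lines above). A simplified version of that loop (profile assumed):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    delay := 400 * time.Millisecond
    for i := 0; i < 5; i++ {
        out, err := exec.Command("minikube", "-p", "functional-853651",
            "ssh", "findmnt -T /mount-9p").Output()
        if err == nil {
            fmt.Printf("mounted:\n%s", out)
            return
        }
        fmt.Printf("not mounted yet, retrying in %v\n", delay)
        time.Sleep(delay)
        delay *= 2 // simple exponential backoff
    }
    fmt.Println("gave up waiting for /mount-9p")
}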

TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdspecific-port236906755/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (421.412975ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 02:41:09.819738 1798370 retry.go:31] will retry after 507.409271ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdspecific-port236906755/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh "sudo umount -f /mount-9p": exit status 1 (302.857253ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-853651 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdspecific-port236906755/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdVerifyCleanup954592019/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdVerifyCleanup954592019/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdVerifyCleanup954592019/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T" /mount1: exit status 1 (456.333519ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 02:41:11.851765 1798370 retry.go:31] will retry after 328.704556ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-853651 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdVerifyCleanup954592019/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdVerifyCleanup954592019/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-853651 /tmp/TestFunctionalparallelMountCmdVerifyCleanup954592019/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)
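The cleanup check above reduces to: start several mounts of one source, confirm each target, then kill them all at once. A minimal sketch under the same assumptions (the host path is illustrative):

    # Start three concurrent mounts of the same host directory.
    minikube mount -p functional-853651 /tmp/src:/mount1 &
    minikube mount -p functional-853651 /tmp/src:/mount2 &
    minikube mount -p functional-853651 /tmp/src:/mount3 &
    # Confirm each target is a mount point; the harness retries this until the
    # mount daemons are ready, as the retry.go line above shows.
    for m in /mount1 /mount2 /mount3; do
      minikube -p functional-853651 ssh "findmnt -T" "$m"
    done
    # Kill every mount process belonging to the profile in one shot.
    minikube mount -p functional-853651 --kill=true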

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-853651 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-853651 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-8g6hv" [cd86fa2a-640f-4e0d-8732-e9a52b1813de] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-8g6hv" [cd86fa2a-640f-4e0d-8732-e9a52b1813de] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004563523s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
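The deployment steps above are plain kubectl; only the readiness polling is done by the harness itself. A minimal equivalent sketch, with kubectl wait standing in for that polling loop:

    kubectl --context functional-853651 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-853651 expose deployment hello-node --type=NodePort --port=8080
    # Block until the pod reports Ready, using the same 10m budget as the test.
    kubectl --context functional-853651 wait --for=condition=Ready pod -l app=hello-node --timeout=10m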

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "363.597479ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.445695ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "353.241219ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.063054ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
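The profile invocations above differ mainly in cost: the --light/-l variants skip probing cluster status, which is why they finish in roughly 60ms against roughly 350ms for the full listing. A minimal sketch; the jq filter assumes the JSON splits profiles into valid/invalid arrays, which is not something the log itself shows:

    minikube profile list                  # full table, probes each cluster
    minikube profile list -l               # light mode, reads config only
    minikube profile list -o json --light | jq -r '.valid[].Name'   # requires jq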

TestFunctional/parallel/ServiceCmd/List (1.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 service list: (1.448813479s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-853651 service list -o json: (1.342170493s)
functional_test.go:1504: Took "1.342253249s" to run "out/minikube-linux-arm64 -p functional-853651 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32312
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-853651 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32312
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
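The service lookups above are different views of one NodePort endpoint. A minimal sketch of how they compose, reusing the endpoint the log resolved:

    minikube -p functional-853651 service --namespace=default --https --url hello-node   # https://192.168.49.2:32312
    minikube -p functional-853651 service hello-node --url --format="{{.IP}}"            # node IP only
    URL=$(minikube -p functional-853651 service hello-node --url)                        # http://192.168.49.2:32312
    curl -s "$URL"   # hits the echo-server pod behind the NodePort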

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-853651
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-853651
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-853651
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-389759 cache add registry.k8s.io/pause:3.1: (1.125788146s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-389759 cache add registry.k8s.io/pause:3.3: (1.153606713s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-389759 cache add registry.k8s.io/pause:latest: (1.0919683s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2162603004/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cache add minikube-local-cache-test:functional-389759
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cache delete minikube-local-cache-test:functional-389759
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-389759
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (313.428436ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.91s)
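The reload sequence above is the whole contract of the cache command: the host-side cache survives deleting an image inside the node. The steps, verbatim from the log:

    minikube -p functional-389759 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1, image gone
    minikube -p functional-389759 cache reload                                            # pushes cached images back into the node
    minikube -p functional-389759 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again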

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3127281515/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-389759 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3127281515/001/logs.txt: (1.031316044s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 config get cpus: exit status 14 (104.434485ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 config get cpus: exit status 14 (89.370147ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.56s)
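The exit-status-14 checks above are how minikube config reports a missing key, so the set/unset round trip can be verified from the shell alone:

    minikube -p functional-389759 config set cpus 2
    minikube -p functional-389759 config get cpus    # prints 2, exit 0
    minikube -p functional-389759 config unset cpus
    minikube -p functional-389759 config get cpus    # "specified key could not be found in config"
    echo $?                                          # 14, matching both failed gets above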

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (198.127899ms)
-- stdout --
	* [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1216 03:11:08.588484 1865729 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:11:08.588733 1865729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.588746 1865729 out.go:374] Setting ErrFile to fd 2...
	I1216 03:11:08.588752 1865729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.589098 1865729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:11:08.589545 1865729 out.go:368] Setting JSON to false
	I1216 03:11:08.590513 1865729 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":32013,"bootTime":1765822656,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:11:08.590589 1865729 start.go:143] virtualization:  
	I1216 03:11:08.594003 1865729 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:11:08.596970 1865729 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:11:08.597047 1865729 notify.go:221] Checking for updates...
	I1216 03:11:08.603143 1865729 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:11:08.606270 1865729 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:11:08.609228 1865729 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:11:08.612063 1865729 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:11:08.615111 1865729 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:11:08.618556 1865729 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:11:08.619379 1865729 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:11:08.642227 1865729 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:11:08.642364 1865729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.707755 1865729 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.696314079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.707907 1865729 docker.go:319] overlay module found
	I1216 03:11:08.714149 1865729 out.go:179] * Using the docker driver based on existing profile
	I1216 03:11:08.717185 1865729 start.go:309] selected driver: docker
	I1216 03:11:08.717203 1865729 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.717297 1865729 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:11:08.720703 1865729 out.go:203] 
	W1216 03:11:08.727546 1865729 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 03:11:08.730574 1865729 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389759 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)
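The dry run validates flags against the existing profile without starting anything, which is why the undersized --memory request fails fast with a dedicated exit code. A minimal sketch:

    minikube start -p functional-389759 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
    echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY): 250MiB is below the 1800MB floor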

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389759 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (198.793082ms)
-- stdout --
	* [functional-389759] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1216 03:11:08.391329 1865682 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:11:08.391539 1865682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.391552 1865682 out.go:374] Setting ErrFile to fd 2...
	I1216 03:11:08.391558 1865682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:08.391932 1865682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:11:08.392357 1865682 out.go:368] Setting JSON to false
	I1216 03:11:08.393198 1865682 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":32013,"bootTime":1765822656,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:11:08.393267 1865682 start.go:143] virtualization:  
	I1216 03:11:08.396723 1865682 out.go:179] * [functional-389759] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1216 03:11:08.400535 1865682 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:11:08.400690 1865682 notify.go:221] Checking for updates...
	I1216 03:11:08.406387 1865682 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:11:08.409319 1865682 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:11:08.412187 1865682 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:11:08.415104 1865682 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:11:08.417972 1865682 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:11:08.421361 1865682 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:11:08.421945 1865682 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:11:08.449775 1865682 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:11:08.449949 1865682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:11:08.516485 1865682 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:11:08.506822282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:11:08.516592 1865682 docker.go:319] overlay module found
	I1216 03:11:08.519808 1865682 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 03:11:08.522679 1865682 start.go:309] selected driver: docker
	I1216 03:11:08.522712 1865682 start.go:927] validating driver "docker" against &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:11:08.522827 1865682 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:11:08.526546 1865682 out.go:203] 
	W1216 03:11:08.529668 1865682 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 03:11:08.532776 1865682 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh -n functional-389759 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cp functional-389759:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1312419370/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh -n functional-389759 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh -n functional-389759 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1798370/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo cat /etc/test/nested/copy/1798370/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1798370.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo cat /etc/ssl/certs/1798370.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1798370.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo cat /usr/share/ca-certificates/1798370.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/17983702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo cat /etc/ssl/certs/17983702.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/17983702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo cat /usr/share/ca-certificates/17983702.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.70s)
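The hashed names checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention for /etc/ssl/certs. A sketch of how such a name can be derived; treat it as illustrative, since the log does not show the hash computation itself:

    # Print the subject hash of a synced PEM inside the node; the output names
    # the matching /etc/ssl/certs/<hash>.0 entry.
    minikube -p functional-389759 ssh "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/1798370.pem"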

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh "sudo systemctl is-active docker": exit status 1 (294.239357ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh "sudo systemctl is-active crio": exit status 1 (269.508396ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)
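The expected failures above lean on systemctl is-active semantics: it exits 0 only for an active unit and 3 for an inactive one, and ssh propagates that status, hence the two status-3 exits in the log. A minimal sketch; the containerd unit name is an assumption based on the profile's configured runtime:

    minikube -p functional-389759 ssh "sudo systemctl is-active docker"       # prints "inactive", exit 3
    minikube -p functional-389759 ssh "sudo systemctl is-active crio"         # prints "inactive", exit 3
    minikube -p functional-389759 ssh "sudo systemctl is-active containerd"   # the active runtime, exit 0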

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-389759 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "364.002101ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.596931ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "334.132217ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "63.620506ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2071099183/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (316.576677ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1216 03:11:01.816478 1798370 retry.go:31] will retry after 330.050254ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2071099183/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh "sudo umount -f /mount-9p": exit status 1 (265.406869ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-389759 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2071099183/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T" /mount1: exit status 1 (602.891895ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 03:11:03.754878 1798370 retry.go:31] will retry after 687.401382ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-389759 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389759 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3771733156/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.17s)
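
The cleanup path exercised here is the profile-wide kill switch: several mounts are started against one source directory, then torn down in a single command. A sketch with placeholder host paths:

# Start three 9p mounts of the same host directory.
out/minikube-linux-arm64 mount -p functional-389759 /tmp/src:/mount1 &
out/minikube-linux-arm64 mount -p functional-389759 /tmp/src:/mount2 &
out/minikube-linux-arm64 mount -p functional-389759 /tmp/src:/mount3 &

# Confirm a target is mounted in the guest.
out/minikube-linux-arm64 -p functional-389759 ssh "findmnt -T" /mount1

# Kill every mount process belonging to the profile at once.
out/minikube-linux-arm64 mount -p functional-389759 --kill=true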

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389759 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-389759
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-389759
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389759 image ls --format short --alsologtostderr:
I1216 03:11:21.200806 1867907 out.go:360] Setting OutFile to fd 1 ...
I1216 03:11:21.201007 1867907 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:21.201034 1867907 out.go:374] Setting ErrFile to fd 2...
I1216 03:11:21.201053 1867907 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:21.201742 1867907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:11:21.202436 1867907 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:21.202613 1867907 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:21.203187 1867907 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:11:21.221567 1867907 ssh_runner.go:195] Run: systemctl --version
I1216 03:11:21.221620 1867907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:11:21.238199 1867907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 03:11:21.333721 1867907 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
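
As the stderr traces in this ImageCommands group show, under containerd the listing is gathered by running sudo crictl images --output json over SSH and re-rendering it client-side. The four output modes the group exercises:

# One image list, four renderings.
out/minikube-linux-arm64 -p functional-389759 image ls --format short
out/minikube-linux-arm64 -p functional-389759 image ls --format table
out/minikube-linux-arm64 -p functional-389759 image ls --format json
out/minikube-linux-arm64 -p functional-389759 image ls --format yaml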

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389759 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-389759  │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ localhost/my-image                          │ functional-389759  │ sha256:f6b905 │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-389759  │ sha256:106be4 │ 992B   │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389759 image ls --format table --alsologtostderr:
I1216 03:11:25.308770 1868345 out.go:360] Setting OutFile to fd 1 ...
I1216 03:11:25.308893 1868345 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:25.308904 1868345 out.go:374] Setting ErrFile to fd 2...
I1216 03:11:25.308909 1868345 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:25.309266 1868345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:11:25.310304 1868345 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:25.310537 1868345 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:25.311505 1868345 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:11:25.328935 1868345 ssh_runner.go:195] Run: systemctl --version
I1216 03:11:25.329007 1868345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:11:25.347364 1868345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 03:11:25.446014 1868345 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389759 image ls --format json --alsologtostderr:
[{"id":"sha256:f6b90553d47705cc6fc14f052bdd590c7f5e52677cc9aef378193ce6df39e196","repoDigests":[],"repoTags":["localhost/my-image:functional-389759"],"size":"830614"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"siz
e":"71300"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-389759"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":
["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:106be42faa4dd4147343df978553d005570a8f7212a8499769ed94b75df65cdd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-389759"],"size":"992"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-s
cheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389759 image ls --format json --alsologtostderr:
I1216 03:11:25.083192 1868304 out.go:360] Setting OutFile to fd 1 ...
I1216 03:11:25.083369 1868304 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:25.083398 1868304 out.go:374] Setting ErrFile to fd 2...
I1216 03:11:25.083418 1868304 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:25.083702 1868304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:11:25.084406 1868304 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:25.084580 1868304 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:25.085129 1868304 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:11:25.102859 1868304 ssh_runner.go:195] Run: systemctl --version
I1216 03:11:25.102918 1868304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:11:25.120145 1868304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 03:11:25.220530 1868304 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389759 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:f6b90553d47705cc6fc14f052bdd590c7f5e52677cc9aef378193ce6df39e196
repoDigests: []
repoTags:
- localhost/my-image:functional-389759
size: "830614"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-389759
size: "2173567"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:106be42faa4dd4147343df978553d005570a8f7212a8499769ed94b75df65cdd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-389759
size: "992"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389759 image ls --format yaml --alsologtostderr:
I1216 03:11:24.856012 1868267 out.go:360] Setting OutFile to fd 1 ...
I1216 03:11:24.856276 1868267 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:24.856311 1868267 out.go:374] Setting ErrFile to fd 2...
I1216 03:11:24.856342 1868267 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:24.856661 1868267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:11:24.857528 1868267 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:24.857755 1868267 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:24.858475 1868267 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:11:24.876162 1868267 ssh_runner.go:195] Run: systemctl --version
I1216 03:11:24.876225 1868267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:11:24.893527 1868267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 03:11:24.991354 1868267 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389759 ssh pgrep buildkitd: exit status 1 (255.871891ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image build -t localhost/my-image:functional-389759 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-389759 image build -t localhost/my-image:functional-389759 testdata/build --alsologtostderr: (2.775309943s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389759 image build -t localhost/my-image:functional-389759 testdata/build --alsologtostderr:
I1216 03:11:21.864595 1868052 out.go:360] Setting OutFile to fd 1 ...
I1216 03:11:21.864779 1868052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:21.864810 1868052 out.go:374] Setting ErrFile to fd 2...
I1216 03:11:21.864835 1868052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 03:11:21.865095 1868052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 03:11:21.865836 1868052 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:21.866505 1868052 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 03:11:21.867125 1868052 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 03:11:21.885095 1868052 ssh_runner.go:195] Run: systemctl --version
I1216 03:11:21.885155 1868052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 03:11:21.903861 1868052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 03:11:22.001885 1868052 build_images.go:162] Building image from path: /tmp/build.2113774681.tar
I1216 03:11:22.001965 1868052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 03:11:22.013152 1868052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2113774681.tar
I1216 03:11:22.017312 1868052 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2113774681.tar: stat -c "%s %y" /var/lib/minikube/build/build.2113774681.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2113774681.tar': No such file or directory
I1216 03:11:22.017353 1868052 ssh_runner.go:362] scp /tmp/build.2113774681.tar --> /var/lib/minikube/build/build.2113774681.tar (3072 bytes)
I1216 03:11:22.036732 1868052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2113774681
I1216 03:11:22.045203 1868052 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2113774681 -xf /var/lib/minikube/build/build.2113774681.tar
I1216 03:11:22.053751 1868052 containerd.go:394] Building image: /var/lib/minikube/build/build.2113774681
I1216 03:11:22.053855 1868052 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2113774681 --local dockerfile=/var/lib/minikube/build/build.2113774681 --output type=image,name=localhost/my-image:functional-389759
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:15dbecb2c6bc0d49eb10897d8d5773740ad35ed43a84f298fb0f13d3aede209f
#8 exporting manifest sha256:15dbecb2c6bc0d49eb10897d8d5773740ad35ed43a84f298fb0f13d3aede209f 0.0s done
#8 exporting config sha256:f6b90553d47705cc6fc14f052bdd590c7f5e52677cc9aef378193ce6df39e196 0.0s done
#8 naming to localhost/my-image:functional-389759 done
#8 DONE 0.2s
I1216 03:11:24.552699 1868052 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2113774681 --local dockerfile=/var/lib/minikube/build/build.2113774681 --output type=image,name=localhost/my-image:functional-389759: (2.498810853s)
I1216 03:11:24.552817 1868052 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2113774681
I1216 03:11:24.560971 1868052 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2113774681.tar
I1216 03:11:24.568790 1868052 build_images.go:218] Built localhost/my-image:functional-389759 from /tmp/build.2113774681.tar
I1216 03:11:24.568820 1868052 build_images.go:134] succeeded building to: functional-389759
I1216 03:11:24.568825 1868052 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.25s)
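
The trace above shows the mechanics: with no buildkitd found via pgrep, minikube packs the build context into a tar, copies it to /var/lib/minikube/build on the node, and drives BuildKit there with buildctl. Condensed to the user-facing commands from this run:

# Build an image directly inside the cluster's containerd.
out/minikube-linux-arm64 -p functional-389759 image build -t localhost/my-image:functional-389759 testdata/build

# Confirm it landed in the runtime.
out/minikube-linux-arm64 -p functional-389759 image ls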

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-389759
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image load --daemon kicbase/echo-server:functional-389759 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.12s)
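
Setup staged the image in the host's Docker daemon; this test then copies it from the daemon into the cluster runtime. The pair, as run here:

# Stage a local tag in the host Docker daemon...
docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-389759

# ...then push it from the daemon into the cluster's container runtime.
out/minikube-linux-arm64 -p functional-389759 image load --daemon kicbase/echo-server:functional-389759
out/minikube-linux-arm64 -p functional-389759 image ls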

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image load --daemon kicbase/echo-server:functional-389759 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-389759
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image load --daemon kicbase/echo-server:functional-389759 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.35s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image save kicbase/echo-server:functional-389759 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image rm kicbase/echo-server:functional-389759 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.65s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-389759
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 image save --daemon kicbase/echo-server:functional-389759 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-389759
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)
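
ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together cover the full round trip; condensed into one sketch (the tarball path is the one this run used):

# Export from the cluster runtime to a tarball on the host.
out/minikube-linux-arm64 -p functional-389759 image save kicbase/echo-server:functional-389759 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar

# Remove it from the cluster, then restore it from the tarball.
out/minikube-linux-arm64 -p functional-389759 image rm kicbase/echo-server:functional-389759
out/minikube-linux-arm64 -p functional-389759 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar

# Or save back into the host's Docker daemon instead of a file.
docker rmi kicbase/echo-server:functional-389759
out/minikube-linux-arm64 -p functional-389759 image save --daemon kicbase/echo-server:functional-389759
docker image inspect kicbase/echo-server:functional-389759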

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-389759 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
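
All three UpdateContextCmd variants reduce to the same invocation, differing only in the kubeconfig state they start from:

# Refresh the profile's kubeconfig entry to match the running cluster.
out/minikube-linux-arm64 -p functional-389759 update-context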

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-389759
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-389759
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-389759
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (179.45s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1216 03:13:44.847457 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:44.853783 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:44.865178 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:44.886608 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:44.928039 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:45.019769 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:45.185522 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:45.507866 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:46.149874 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:47.431247 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:49.993994 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:51.133915 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:55.115744 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:14:05.357332 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:14:25.839250 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:15:06.800708 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m58.49559186s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (179.45s)
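
For reference, the cluster shape under test here, reduced to the two commands the harness runs (flags as captured in this run):

# Start a multi-control-plane (HA) cluster and wait for every component.
out/minikube-linux-arm64 -p ha-789453 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd

# Report per-node host/kubelet/apiserver state.
out/minikube-linux-arm64 -p ha-789453 status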

TestMultiControlPlane/serial/DeployApp (7.69s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- rollout status deployment/busybox
E1216 03:15:51.131179 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 kubectl -- rollout status deployment/busybox: (4.798059575s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-rfjjk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-zbfgv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-rfjjk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-zbfgv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-rfjjk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-zbfgv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.69s)
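
Condensed, the deployment check is: roll out the busybox test pods, then prove DNS resolution from each pod at three scopes (external name, in-cluster short name, and FQDN). Against one pod from this run (pod names are generated per deployment):

# Deploy and wait for the rollout.
out/minikube-linux-arm64 -p ha-789453 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-arm64 -p ha-789453 kubectl -- rollout status deployment/busybox

# Resolve an external name, then the in-cluster service name and its FQDN.
out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- nslookup kubernetes.io
out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- nslookup kubernetes.default
out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- nslookup kubernetes.default.svc.cluster.local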

TestMultiControlPlane/serial/PingHostFromPods (1.7s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-rfjjk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-rfjjk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-zbfgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-zbfgv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)
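
The host-reachability check resolves host.minikube.internal from inside a pod (the address sits on line 5 of nslookup's output, hence the awk 'NR==5' | cut pipeline) and then pings the host gateway:

# Extract the host address as seen by in-cluster DNS, then ping it.
out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-arm64 -p ha-789453 kubectl -- exec busybox-7b57f96db7-d2s2f -- sh -c "ping -c 1 192.168.49.1"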

TestMultiControlPlane/serial/AddWorkerNode (59.74s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 node add --alsologtostderr -v 5
E1216 03:16:28.722851 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 node add --alsologtostderr -v 5: (58.671091559s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5: (1.066039816s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.74s)

TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-789453 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048527507s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

TestMultiControlPlane/serial/CopyFile (21s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp testdata/cp-test.txt ha-789453:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2648876500/001/cp-test_ha-789453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453:/home/docker/cp-test.txt ha-789453-m02:/home/docker/cp-test_ha-789453_ha-789453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test_ha-789453_ha-789453-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453:/home/docker/cp-test.txt ha-789453-m03:/home/docker/cp-test_ha-789453_ha-789453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test_ha-789453_ha-789453-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453:/home/docker/cp-test.txt ha-789453-m04:/home/docker/cp-test_ha-789453_ha-789453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test_ha-789453_ha-789453-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp testdata/cp-test.txt ha-789453-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2648876500/001/cp-test_ha-789453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m02:/home/docker/cp-test.txt ha-789453:/home/docker/cp-test_ha-789453-m02_ha-789453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test_ha-789453-m02_ha-789453.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m02:/home/docker/cp-test.txt ha-789453-m03:/home/docker/cp-test_ha-789453-m02_ha-789453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test_ha-789453-m02_ha-789453-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m02:/home/docker/cp-test.txt ha-789453-m04:/home/docker/cp-test_ha-789453-m02_ha-789453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test_ha-789453-m02_ha-789453-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp testdata/cp-test.txt ha-789453-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2648876500/001/cp-test_ha-789453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m03:/home/docker/cp-test.txt ha-789453:/home/docker/cp-test_ha-789453-m03_ha-789453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test_ha-789453-m03_ha-789453.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m03:/home/docker/cp-test.txt ha-789453-m02:/home/docker/cp-test_ha-789453-m03_ha-789453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test_ha-789453-m03_ha-789453-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m03:/home/docker/cp-test.txt ha-789453-m04:/home/docker/cp-test_ha-789453-m03_ha-789453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test_ha-789453-m03_ha-789453-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp testdata/cp-test.txt ha-789453-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2648876500/001/cp-test_ha-789453-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m04:/home/docker/cp-test.txt ha-789453:/home/docker/cp-test_ha-789453-m04_ha-789453.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453 "sudo cat /home/docker/cp-test_ha-789453-m04_ha-789453.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m04:/home/docker/cp-test.txt ha-789453-m02:/home/docker/cp-test_ha-789453-m04_ha-789453-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m02 "sudo cat /home/docker/cp-test_ha-789453-m04_ha-789453-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 cp ha-789453-m04:/home/docker/cp-test.txt ha-789453-m03:/home/docker/cp-test_ha-789453-m04_ha-789453-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 ssh -n ha-789453-m03 "sudo cat /home/docker/cp-test_ha-789453-m04_ha-789453-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.00s)
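
For reference, the copy matrix above exercises three forms of the cp subcommand; a minimal sketch (minikube standing in for the binary under test, destination paths illustrative):

	minikube -p ha-789453 cp testdata/cp-test.txt ha-789453:/home/docker/cp-test.txt                    # host -> node
	minikube -p ha-789453 cp ha-789453:/home/docker/cp-test.txt /tmp/cp-test.txt                        # node -> host
	minikube -p ha-789453 cp ha-789453:/home/docker/cp-test.txt ha-789453-m02:/home/docker/cp-test.txt  # node -> node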

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 node stop m02 --alsologtostderr -v 5: (12.240316891s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5: exit status 7 (787.634887ms)

                                                
                                                
-- stdout --
	ha-789453
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-789453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789453-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-789453-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:17:32.244281 1885643 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:17:32.244461 1885643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:17:32.244492 1885643 out.go:374] Setting ErrFile to fd 2...
	I1216 03:17:32.244514 1885643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:17:32.244891 1885643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:17:32.245187 1885643 out.go:368] Setting JSON to false
	I1216 03:17:32.245246 1885643 mustload.go:66] Loading cluster: ha-789453
	I1216 03:17:32.247344 1885643 config.go:182] Loaded profile config "ha-789453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 03:17:32.247418 1885643 status.go:174] checking status of ha-789453 ...
	I1216 03:17:32.247734 1885643 notify.go:221] Checking for updates...
	I1216 03:17:32.248618 1885643 cli_runner.go:164] Run: docker container inspect ha-789453 --format={{.State.Status}}
	I1216 03:17:32.270010 1885643 status.go:371] ha-789453 host status = "Running" (err=<nil>)
	I1216 03:17:32.270035 1885643 host.go:66] Checking if "ha-789453" exists ...
	I1216 03:17:32.270331 1885643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-789453
	I1216 03:17:32.306629 1885643 host.go:66] Checking if "ha-789453" exists ...
	I1216 03:17:32.306946 1885643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:17:32.306998 1885643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-789453
	I1216 03:17:32.326699 1885643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/ha-789453/id_rsa Username:docker}
	I1216 03:17:32.424827 1885643 ssh_runner.go:195] Run: systemctl --version
	I1216 03:17:32.431609 1885643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:17:32.445589 1885643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:17:32.506017 1885643 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-16 03:17:32.495784913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:17:32.506765 1885643 kubeconfig.go:125] found "ha-789453" server: "https://192.168.49.254:8443"
	I1216 03:17:32.506800 1885643 api_server.go:166] Checking apiserver status ...
	I1216 03:17:32.506853 1885643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:17:32.519920 1885643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	I1216 03:17:32.528324 1885643 api_server.go:182] apiserver freezer: "6:freezer:/docker/35f8e5d1f3b031e6c29ef8ef8f0930e01677178287a7123e8a083f8b0ced69ad/kubepods/burstable/pod60b73934ad9c43e6caf698374f055247/310dcaab65c09c4d0bfdbcc5e0bff6f121c8f20cacfcf283beb398e2002de967"
	I1216 03:17:32.528407 1885643 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/35f8e5d1f3b031e6c29ef8ef8f0930e01677178287a7123e8a083f8b0ced69ad/kubepods/burstable/pod60b73934ad9c43e6caf698374f055247/310dcaab65c09c4d0bfdbcc5e0bff6f121c8f20cacfcf283beb398e2002de967/freezer.state
	I1216 03:17:32.536169 1885643 api_server.go:204] freezer state: "THAWED"
	I1216 03:17:32.536199 1885643 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 03:17:32.544910 1885643 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 03:17:32.544936 1885643 status.go:463] ha-789453 apiserver status = Running (err=<nil>)
	I1216 03:17:32.544947 1885643 status.go:176] ha-789453 status: &{Name:ha-789453 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:17:32.544963 1885643 status.go:174] checking status of ha-789453-m02 ...
	I1216 03:17:32.545274 1885643 cli_runner.go:164] Run: docker container inspect ha-789453-m02 --format={{.State.Status}}
	I1216 03:17:32.562939 1885643 status.go:371] ha-789453-m02 host status = "Stopped" (err=<nil>)
	I1216 03:17:32.562961 1885643 status.go:384] host is not running, skipping remaining checks
	I1216 03:17:32.562969 1885643 status.go:176] ha-789453-m02 status: &{Name:ha-789453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:17:32.562990 1885643 status.go:174] checking status of ha-789453-m03 ...
	I1216 03:17:32.563386 1885643 cli_runner.go:164] Run: docker container inspect ha-789453-m03 --format={{.State.Status}}
	I1216 03:17:32.585662 1885643 status.go:371] ha-789453-m03 host status = "Running" (err=<nil>)
	I1216 03:17:32.585686 1885643 host.go:66] Checking if "ha-789453-m03" exists ...
	I1216 03:17:32.586012 1885643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-789453-m03
	I1216 03:17:32.603885 1885643 host.go:66] Checking if "ha-789453-m03" exists ...
	I1216 03:17:32.604202 1885643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:17:32.604247 1885643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-789453-m03
	I1216 03:17:32.621627 1885643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/ha-789453-m03/id_rsa Username:docker}
	I1216 03:17:32.721120 1885643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:17:32.734723 1885643 kubeconfig.go:125] found "ha-789453" server: "https://192.168.49.254:8443"
	I1216 03:17:32.734754 1885643 api_server.go:166] Checking apiserver status ...
	I1216 03:17:32.734804 1885643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:17:32.748205 1885643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1446/cgroup
	I1216 03:17:32.756789 1885643 api_server.go:182] apiserver freezer: "6:freezer:/docker/43a721d37294181e8bcfb01aef6655be7d9f89967ebe342719fcff4fd7ed9316/kubepods/burstable/pod47af31917b37d17665b121bd66a96b5e/b595f85fde59bb8628cda25a0e41357b24f2f767c0b754eaa98670c9f0e03592"
	I1216 03:17:32.756857 1885643 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/43a721d37294181e8bcfb01aef6655be7d9f89967ebe342719fcff4fd7ed9316/kubepods/burstable/pod47af31917b37d17665b121bd66a96b5e/b595f85fde59bb8628cda25a0e41357b24f2f767c0b754eaa98670c9f0e03592/freezer.state
	I1216 03:17:32.765089 1885643 api_server.go:204] freezer state: "THAWED"
	I1216 03:17:32.765121 1885643 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 03:17:32.774320 1885643 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 03:17:32.774416 1885643 status.go:463] ha-789453-m03 apiserver status = Running (err=<nil>)
	I1216 03:17:32.774451 1885643 status.go:176] ha-789453-m03 status: &{Name:ha-789453-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:17:32.774506 1885643 status.go:174] checking status of ha-789453-m04 ...
	I1216 03:17:32.774977 1885643 cli_runner.go:164] Run: docker container inspect ha-789453-m04 --format={{.State.Status}}
	I1216 03:17:32.798322 1885643 status.go:371] ha-789453-m04 host status = "Running" (err=<nil>)
	I1216 03:17:32.798363 1885643 host.go:66] Checking if "ha-789453-m04" exists ...
	I1216 03:17:32.798728 1885643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-789453-m04
	I1216 03:17:32.818710 1885643 host.go:66] Checking if "ha-789453-m04" exists ...
	I1216 03:17:32.819173 1885643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:17:32.819254 1885643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-789453-m04
	I1216 03:17:32.842413 1885643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/ha-789453-m04/id_rsa Username:docker}
	I1216 03:17:32.941502 1885643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:17:32.960110 1885643 status.go:176] ha-789453-m04 status: &{Name:ha-789453-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.03s)
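
Note the non-zero exit above: status returns exit code 7 once any node is stopped, so the command doubles as a scriptable health probe. A minimal sketch against the same profile:

	minikube -p ha-789453 status >/dev/null || echo "cluster degraded (exit $?)"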

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 node start m02 --alsologtostderr -v 5: (12.880511618s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5: (1.439991817s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.090757177s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 stop --alsologtostderr -v 5: (37.719747852s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 start --wait true --alsologtostderr -v 5
E1216 03:18:44.846587 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:18:51.133839 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:18:54.209534 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:19:12.564160 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 start --wait true --alsologtostderr -v 5: (1m2.673350178s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.57s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 node delete m03 --alsologtostderr -v 5: (10.158003052s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.13s)
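
Unwrapped, the Ready-condition template used above reads more easily; it prints one condition status per node (True when the node is Ready):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'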

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 stop --alsologtostderr -v 5: (36.390405301s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5: exit status 7 (125.639321ms)

                                                
                                                
-- stdout --
	ha-789453
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789453-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789453-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:20:18.255024 1900470 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:20:18.255199 1900470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:20:18.255233 1900470 out.go:374] Setting ErrFile to fd 2...
	I1216 03:20:18.255245 1900470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:20:18.255618 1900470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:20:18.256233 1900470 out.go:368] Setting JSON to false
	I1216 03:20:18.256310 1900470 mustload.go:66] Loading cluster: ha-789453
	I1216 03:20:18.256622 1900470 notify.go:221] Checking for updates...
	I1216 03:20:18.256823 1900470 config.go:182] Loaded profile config "ha-789453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 03:20:18.256867 1900470 status.go:174] checking status of ha-789453 ...
	I1216 03:20:18.257717 1900470 cli_runner.go:164] Run: docker container inspect ha-789453 --format={{.State.Status}}
	I1216 03:20:18.276940 1900470 status.go:371] ha-789453 host status = "Stopped" (err=<nil>)
	I1216 03:20:18.276961 1900470 status.go:384] host is not running, skipping remaining checks
	I1216 03:20:18.276968 1900470 status.go:176] ha-789453 status: &{Name:ha-789453 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:20:18.277000 1900470 status.go:174] checking status of ha-789453-m02 ...
	I1216 03:20:18.277340 1900470 cli_runner.go:164] Run: docker container inspect ha-789453-m02 --format={{.State.Status}}
	I1216 03:20:18.304680 1900470 status.go:371] ha-789453-m02 host status = "Stopped" (err=<nil>)
	I1216 03:20:18.304704 1900470 status.go:384] host is not running, skipping remaining checks
	I1216 03:20:18.304712 1900470 status.go:176] ha-789453-m02 status: &{Name:ha-789453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:20:18.304733 1900470 status.go:174] checking status of ha-789453-m04 ...
	I1216 03:20:18.305053 1900470 cli_runner.go:164] Run: docker container inspect ha-789453-m04 --format={{.State.Status}}
	I1216 03:20:18.326096 1900470 status.go:371] ha-789453-m04 host status = "Stopped" (err=<nil>)
	I1216 03:20:18.326126 1900470 status.go:384] host is not running, skipping remaining checks
	I1216 03:20:18.326133 1900470 status.go:176] ha-789453-m04 status: &{Name:ha-789453-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1216 03:20:51.130790 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.111788612s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.07s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (54.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 node add --control-plane --alsologtostderr -v 5: (53.08036093s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-789453 status --alsologtostderr -v 5: (1.098163433s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (54.18s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.031478824s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
TestJSONOutput/start/Command (81.81s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-907739 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-907739 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m21.809724887s)
--- PASS: TestJSONOutput/start/Command (81.81s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.89s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-907739 --output=json --user=testUser
E1216 03:23:44.847286 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestJSONOutput/pause/Command (0.89s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-907739 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.01s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-907739 --output=json --user=testUser
E1216 03:23:51.134223 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-907739 --output=json --user=testUser: (6.013116453s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-564543 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-564543 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.823453ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"05a10057-4722-44d6-854d-0f7cb3814624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-564543] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"15e7016f-0164-4b44-a25d-0ac1f0482676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22158"}}
	{"specversion":"1.0","id":"7f69abcf-a831-4126-9d25-de61fe33473e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"84f2b88b-dfba-4ecf-8c04-e3dff7243174","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig"}}
	{"specversion":"1.0","id":"1186d90f-608a-4b61-9090-fdb14e17af12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube"}}
	{"specversion":"1.0","id":"7f70d516-f749-4156-bbd6-7626669b506d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"99858eb7-d9ba-4dd6-a736-d857d8cb3ea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c7db8aa8-6269-4da7-a0fc-4f36a35db3be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-564543" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-564543
--- PASS: TestErrorJSONOutput (0.24s)
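
Each line of --output=json is a standalone CloudEvent, so the error event above can be filtered per line; a sketch assuming jq is installed (profile name illustrative):

	minikube start -p demo --memory=3072 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'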

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.58s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-222205 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-222205 --network=: (38.381913405s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-222205" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-222205
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-222205: (2.166561219s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.58s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.23s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-507766 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-507766 --network=bridge: (36.138642099s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-507766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-507766
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-507766: (2.066763525s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.23s)

                                                
                                    
TestKicExistingNetwork (34.9s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1216 03:25:16.555813 1798370 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 03:25:16.569981 1798370 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 03:25:16.570066 1798370 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 03:25:16.570088 1798370 cli_runner.go:164] Run: docker network inspect existing-network
W1216 03:25:16.586063 1798370 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 03:25:16.586101 1798370 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1216 03:25:16.586116 1798370 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1216 03:25:16.586216 1798370 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 03:25:16.603503 1798370 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dec5f3d28f85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:96:6a:8b:2d:78} reservation:<nil>}
I1216 03:25:16.603811 1798370 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400041ac40}
I1216 03:25:16.603833 1798370 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 03:25:16.603883 1798370 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 03:25:16.672034 1798370 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-056793 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-056793 --network=existing-network: (32.66237872s)
helpers_test.go:176: Cleaning up "existing-network-056793" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-056793
E1216 03:25:51.130803 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-056793: (2.08875177s)
I1216 03:25:51.440228 1798370 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.90s)
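
The pre-created network the test attaches to can be reproduced by hand with the same two commands the log shows, in sketch form:

	docker network create --driver=bridge --subnet=192.168.58.0/24 existing-network
	minikube start -p existing-network-056793 --network=existing-network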

                                                
                                    
TestKicCustomSubnet (35.44s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-350446 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-350446 --subnet=192.168.60.0/24: (33.2054668s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-350446 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-350446" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-350446
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-350446: (2.213269965s)
--- PASS: TestKicCustomSubnet (35.44s)

                                                
                                    
TestKicStaticIP (34.59s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-762310 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-762310 --static-ip=192.168.200.200: (32.186716487s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-762310 ip
helpers_test.go:176: Cleaning up "static-ip-762310" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-762310
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-762310: (2.235773278s)
--- PASS: TestKicStaticIP (34.59s)
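
A quick check that the requested address was honored is to compare it with the ip subcommand, as the test does; a sketch using the same profile and address:

	minikube start -p static-ip-762310 --static-ip=192.168.200.200
	test "$(minikube -p static-ip-762310 ip)" = "192.168.200.200" && echo "static IP honored"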

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (69.77s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-968542 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-968542 --driver=docker  --container-runtime=containerd: (29.330181577s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-971151 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-971151 --driver=docker  --container-runtime=containerd: (34.548945883s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-968542
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-971151
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-971151" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-971151
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-971151: (2.051214395s)
helpers_test.go:176: Cleaning up "first-968542" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-968542
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-968542: (2.380947845s)
--- PASS: TestMinikubeProfile (69.77s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-999287 --memory=3072 --mount-string /tmp/TestMountStartserial826903095/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-999287 --memory=3072 --mount-string /tmp/TestMountStartserial826903095/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.349821866s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.35s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-999287 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
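
The mount can be exercised end to end by writing on the host side and listing from the guest; a sketch where /tmp/hostdir stands in for the --mount-string source directory:

	touch /tmp/hostdir/hello
	minikube -p mount-start-1-999287 ssh -- ls /minikube-host    # should now list "hello"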

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-001478 --memory=3072 --mount-string /tmp/TestMountStartserial826903095/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-001478 --memory=3072 --mount-string /tmp/TestMountStartserial826903095/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.40872554s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.41s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-001478 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-999287 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-999287 --alsologtostderr -v=5: (1.716122056s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-001478 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-001478
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-001478: (1.295460972s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-001478
E1216 03:28:34.217467 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-001478: (6.5436188s)
--- PASS: TestMountStart/serial/RestartStopped (7.54s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-001478 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.38s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-418051 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1216 03:28:44.847223 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:28:51.133605 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:30:07.926269 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-418051 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m48.869342474s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.38s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.3s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-418051 -- rollout status deployment/busybox: (3.36810354s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-4pbhq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-jzqwz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-4pbhq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-jzqwz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-4pbhq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-jzqwz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.30s)
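
The same DNS verification can be scripted outside the harness; a rough sketch, assuming the busybox deployment above is still rolled out. The app=busybox label selector is an assumption, since the manifest testdata/multinodes/multinode-pod-dns-test.yaml itself is not shown in this log:

  kubectl --context multinode-418051 rollout status deployment/busybox
  # run the cluster-DNS check in every busybox pod, as the test does per pod
  for pod in $(kubectl --context multinode-418051 get pods -l app=busybox -o name); do
    kubectl --context multinode-418051 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done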

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-4pbhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-4pbhq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-jzqwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-418051 -- exec busybox-7b57f96db7-jzqwz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
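
The host-IP discovery used above is reproducible by hand; a minimal sketch, assuming the busybox pods from DeployApp2Nodes are still running. The awk 'NR==5' offset is specific to the line layout of busybox's nslookup output, exactly as in the test:

  # resolve the host address from inside a pod, then ping it
  HOST_IP=$(kubectl --context multinode-418051 exec deploy/busybox -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context multinode-418051 exec deploy/busybox -- ping -c 1 "$HOST_IP"

host.minikube.internal resolves to the gateway of the cluster's Docker network (192.168.67.1 in this run).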

                                                
                                    
TestMultiNode/serial/AddNode (27.79s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-418051 -v=5 --alsologtostderr
E1216 03:30:51.130757 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-418051 -v=5 --alsologtostderr: (27.066407533s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.79s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-418051 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp testdata/cp-test.txt multinode-418051:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4104916022/001/cp-test_multinode-418051.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051:/home/docker/cp-test.txt multinode-418051-m02:/home/docker/cp-test_multinode-418051_multinode-418051-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m02 "sudo cat /home/docker/cp-test_multinode-418051_multinode-418051-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051:/home/docker/cp-test.txt multinode-418051-m03:/home/docker/cp-test_multinode-418051_multinode-418051-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m03 "sudo cat /home/docker/cp-test_multinode-418051_multinode-418051-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp testdata/cp-test.txt multinode-418051-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4104916022/001/cp-test_multinode-418051-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051-m02:/home/docker/cp-test.txt multinode-418051:/home/docker/cp-test_multinode-418051-m02_multinode-418051.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051 "sudo cat /home/docker/cp-test_multinode-418051-m02_multinode-418051.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051-m02:/home/docker/cp-test.txt multinode-418051-m03:/home/docker/cp-test_multinode-418051-m02_multinode-418051-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m03 "sudo cat /home/docker/cp-test_multinode-418051-m02_multinode-418051-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp testdata/cp-test.txt multinode-418051-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4104916022/001/cp-test_multinode-418051-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051-m03:/home/docker/cp-test.txt multinode-418051:/home/docker/cp-test_multinode-418051-m03_multinode-418051.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051 "sudo cat /home/docker/cp-test_multinode-418051-m03_multinode-418051.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 cp multinode-418051-m03:/home/docker/cp-test.txt multinode-418051-m02:/home/docker/cp-test_multinode-418051-m03_multinode-418051-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m02 "sudo cat /home/docker/cp-test_multinode-418051-m03_multinode-418051-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.47s)
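
All of the copies above use the same node-addressing convention: a bare path refers to the machine running the tests, while <node>:<path> refers to a cluster node. One round-trip, taken directly from the pattern above:

  out/minikube-linux-arm64 -p multinode-418051 cp testdata/cp-test.txt multinode-418051-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p multinode-418051 ssh -n multinode-418051-m02 "sudo cat /home/docker/cp-test.txt"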

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-418051 node stop m03: (1.312598225s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-418051 status: exit status 7 (569.570795ms)

                                                
                                                
-- stdout --
	multinode-418051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-418051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-418051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr: exit status 7 (535.777758ms)

                                                
                                                
-- stdout --
	multinode-418051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-418051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-418051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:31:18.297453 1953839 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:31:18.297813 1953839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:31:18.297821 1953839 out.go:374] Setting ErrFile to fd 2...
	I1216 03:31:18.297826 1953839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:31:18.298245 1953839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:31:18.298480 1953839 out.go:368] Setting JSON to false
	I1216 03:31:18.298499 1953839 mustload.go:66] Loading cluster: multinode-418051
	I1216 03:31:18.300496 1953839 config.go:182] Loaded profile config "multinode-418051": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 03:31:18.300584 1953839 status.go:174] checking status of multinode-418051 ...
	I1216 03:31:18.300907 1953839 notify.go:221] Checking for updates...
	I1216 03:31:18.304637 1953839 cli_runner.go:164] Run: docker container inspect multinode-418051 --format={{.State.Status}}
	I1216 03:31:18.330500 1953839 status.go:371] multinode-418051 host status = "Running" (err=<nil>)
	I1216 03:31:18.330524 1953839 host.go:66] Checking if "multinode-418051" exists ...
	I1216 03:31:18.330829 1953839 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-418051
	I1216 03:31:18.354680 1953839 host.go:66] Checking if "multinode-418051" exists ...
	I1216 03:31:18.354983 1953839 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:31:18.355028 1953839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-418051
	I1216 03:31:18.373138 1953839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34479 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/multinode-418051/id_rsa Username:docker}
	I1216 03:31:18.468640 1953839 ssh_runner.go:195] Run: systemctl --version
	I1216 03:31:18.477083 1953839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:31:18.490924 1953839 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:31:18.550942 1953839 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 03:31:18.541290627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:31:18.551555 1953839 kubeconfig.go:125] found "multinode-418051" server: "https://192.168.67.2:8443"
	I1216 03:31:18.551597 1953839 api_server.go:166] Checking apiserver status ...
	I1216 03:31:18.551647 1953839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:31:18.564223 1953839 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1381/cgroup
	I1216 03:31:18.572655 1953839 api_server.go:182] apiserver freezer: "6:freezer:/docker/65d7d6b502ce69e8950e8d37517cf994d30181f1ebeddbb93980aae02557a8cb/kubepods/burstable/podd78e675a39f66dc90d5b3b4c8d3a2c87/e0a647914842358dab3c1842c8e63b03eb71e7f6450884991e0fac776d636154"
	I1216 03:31:18.572743 1953839 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/65d7d6b502ce69e8950e8d37517cf994d30181f1ebeddbb93980aae02557a8cb/kubepods/burstable/podd78e675a39f66dc90d5b3b4c8d3a2c87/e0a647914842358dab3c1842c8e63b03eb71e7f6450884991e0fac776d636154/freezer.state
	I1216 03:31:18.580610 1953839 api_server.go:204] freezer state: "THAWED"
	I1216 03:31:18.580643 1953839 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 03:31:18.588851 1953839 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 03:31:18.588882 1953839 status.go:463] multinode-418051 apiserver status = Running (err=<nil>)
	I1216 03:31:18.588892 1953839 status.go:176] multinode-418051 status: &{Name:multinode-418051 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:31:18.588908 1953839 status.go:174] checking status of multinode-418051-m02 ...
	I1216 03:31:18.589216 1953839 cli_runner.go:164] Run: docker container inspect multinode-418051-m02 --format={{.State.Status}}
	I1216 03:31:18.606321 1953839 status.go:371] multinode-418051-m02 host status = "Running" (err=<nil>)
	I1216 03:31:18.606346 1953839 host.go:66] Checking if "multinode-418051-m02" exists ...
	I1216 03:31:18.606666 1953839 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-418051-m02
	I1216 03:31:18.624273 1953839 host.go:66] Checking if "multinode-418051-m02" exists ...
	I1216 03:31:18.624620 1953839 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:31:18.624670 1953839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-418051-m02
	I1216 03:31:18.642900 1953839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34484 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/multinode-418051-m02/id_rsa Username:docker}
	I1216 03:31:18.736939 1953839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:31:18.750535 1953839 status.go:176] multinode-418051-m02 status: &{Name:multinode-418051-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:31:18.750570 1953839 status.go:174] checking status of multinode-418051-m03 ...
	I1216 03:31:18.750884 1953839 cli_runner.go:164] Run: docker container inspect multinode-418051-m03 --format={{.State.Status}}
	I1216 03:31:18.769385 1953839 status.go:371] multinode-418051-m03 host status = "Stopped" (err=<nil>)
	I1216 03:31:18.769406 1953839 status.go:384] host is not running, skipping remaining checks
	I1216 03:31:18.769413 1953839 status.go:176] multinode-418051-m03 status: &{Name:multinode-418051-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
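
As the non-zero exits above show, minikube status reports a degraded cluster through its exit code (7 here, with one host stopped) rather than through stderr. A small guard built on that observed behavior:

  out/minikube-linux-arm64 -p multinode-418051 status
  rc=$?
  if [ "$rc" -ne 0 ]; then
    # exit 7 corresponded to a stopped host in this run
    echo "cluster degraded: status exited $rc"
  fi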

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-418051 node start m03 -v=5 --alsologtostderr: (7.415453396s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.22s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-418051
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-418051
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-418051: (25.275550868s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-418051 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-418051 --wait=true -v=5 --alsologtostderr: (52.584829619s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-418051
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.00s)
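
The node-preservation claim can be checked directly by diffing the node list across the stop/start cycle, which is what the test does around its stop and start calls; a sketch using the same commands:

  before=$(out/minikube-linux-arm64 node list -p multinode-418051)
  out/minikube-linux-arm64 stop -p multinode-418051
  out/minikube-linux-arm64 start -p multinode-418051 --wait=true
  after=$(out/minikube-linux-arm64 node list -p multinode-418051)
  [ "$before" = "$after" ] && echo "node list preserved across restart"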

                                                
                                    
TestMultiNode/serial/DeleteNode (5.92s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-418051 node delete m03: (5.23155563s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.92s)
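
The go-template above prints one Ready condition per remaining node; the same query, minus the harness quoting:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

After the delete it should print two lines, one per surviving node, each True if that node is Ready.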

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-418051 stop: (23.895937132s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-418051 status: exit status 7 (98.179152ms)

                                                
                                                
-- stdout --
	multinode-418051
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-418051-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr: exit status 7 (90.382456ms)

                                                
                                                
-- stdout --
	multinode-418051
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-418051-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:33:14.957881 1962630 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:33:14.957997 1962630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:14.958009 1962630 out.go:374] Setting ErrFile to fd 2...
	I1216 03:33:14.958014 1962630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:33:14.958285 1962630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:33:14.958474 1962630 out.go:368] Setting JSON to false
	I1216 03:33:14.958507 1962630 mustload.go:66] Loading cluster: multinode-418051
	I1216 03:33:14.958917 1962630 config.go:182] Loaded profile config "multinode-418051": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 03:33:14.958941 1962630 status.go:174] checking status of multinode-418051 ...
	I1216 03:33:14.959502 1962630 cli_runner.go:164] Run: docker container inspect multinode-418051 --format={{.State.Status}}
	I1216 03:33:14.959757 1962630 notify.go:221] Checking for updates...
	I1216 03:33:14.977573 1962630 status.go:371] multinode-418051 host status = "Stopped" (err=<nil>)
	I1216 03:33:14.977599 1962630 status.go:384] host is not running, skipping remaining checks
	I1216 03:33:14.977606 1962630 status.go:176] multinode-418051 status: &{Name:multinode-418051 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:33:14.977637 1962630 status.go:174] checking status of multinode-418051-m02 ...
	I1216 03:33:14.977952 1962630 cli_runner.go:164] Run: docker container inspect multinode-418051-m02 --format={{.State.Status}}
	I1216 03:33:14.995843 1962630 status.go:371] multinode-418051-m02 host status = "Stopped" (err=<nil>)
	I1216 03:33:14.995863 1962630 status.go:384] host is not running, skipping remaining checks
	I1216 03:33:14.995879 1962630 status.go:176] multinode-418051-m02 status: &{Name:multinode-418051-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-418051 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1216 03:33:44.847123 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:33:51.134235 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-418051 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.902501301s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-418051 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.58s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.89s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-418051
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-418051-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-418051-m02 --driver=docker  --container-runtime=containerd: exit status 14 (100.004286ms)

                                                
                                                
-- stdout --
	* [multinode-418051-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-418051-m02' is duplicated with machine name 'multinode-418051-m02' in profile 'multinode-418051'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-418051-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-418051-m03 --driver=docker  --container-runtime=containerd: (33.267892603s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-418051
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-418051: exit status 80 (353.057808ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-418051 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-418051-m03 already exists in multinode-418051-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-418051-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-418051-m03: (2.116166477s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.89s)

                                                
                                    
TestPreload (121.92s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-477674 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1216 03:35:34.211223 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-477674 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (1m1.675196566s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-477674 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-477674 image pull gcr.io/k8s-minikube/busybox: (2.11538207s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-477674
E1216 03:35:51.130736 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-477674: (5.920215275s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-477674 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-477674 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (49.34289212s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-477674 image list
helpers_test.go:176: Cleaning up "test-preload-477674" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-477674
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-477674: (2.466795954s)
--- PASS: TestPreload (121.92s)
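
The sequence above is the preload contract in miniature: start without a preloaded tarball, pull an extra image, then restart with --preload=true and confirm the image survived. Condensed from the commands above; the final grep is an added convenience, not from the log:

  out/minikube-linux-arm64 -p test-preload-477674 image pull gcr.io/k8s-minikube/busybox
  out/minikube-linux-arm64 stop -p test-preload-477674
  out/minikube-linux-arm64 start -p test-preload-477674 --preload=true --wait=true --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p test-preload-477674 image list | grep busybox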

                                                
                                    
TestScheduledStopUnix (108.25s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-015055 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-015055 --memory=3072 --driver=docker  --container-runtime=containerd: (31.891582022s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-015055 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 03:37:19.796094 1978524 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:37:19.796217 1978524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:37:19.796235 1978524 out.go:374] Setting ErrFile to fd 2...
	I1216 03:37:19.796240 1978524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:37:19.796484 1978524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:37:19.796764 1978524 out.go:368] Setting JSON to false
	I1216 03:37:19.796873 1978524 mustload.go:66] Loading cluster: scheduled-stop-015055
	I1216 03:37:19.797220 1978524 config.go:182] Loaded profile config "scheduled-stop-015055": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 03:37:19.797290 1978524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/config.json ...
	I1216 03:37:19.797461 1978524 mustload.go:66] Loading cluster: scheduled-stop-015055
	I1216 03:37:19.797579 1978524 config.go:182] Loaded profile config "scheduled-stop-015055": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-015055 -n scheduled-stop-015055
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-015055 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 03:37:20.287322 1978615 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:37:20.287545 1978615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:37:20.287571 1978615 out.go:374] Setting ErrFile to fd 2...
	I1216 03:37:20.287588 1978615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:37:20.287866 1978615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:37:20.288159 1978615 out.go:368] Setting JSON to false
	I1216 03:37:20.288399 1978615 daemonize_unix.go:73] killing process 1978540 as it is an old scheduled stop
	I1216 03:37:20.288519 1978615 mustload.go:66] Loading cluster: scheduled-stop-015055
	I1216 03:37:20.288930 1978615 config.go:182] Loaded profile config "scheduled-stop-015055": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 03:37:20.289046 1978615 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/config.json ...
	I1216 03:37:20.289254 1978615 mustload.go:66] Loading cluster: scheduled-stop-015055
	I1216 03:37:20.289409 1978615 config.go:182] Loaded profile config "scheduled-stop-015055": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1216 03:37:20.297128 1798370 retry.go:31] will retry after 57.279µs: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.298291 1798370 retry.go:31] will retry after 187.501µs: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.299472 1798370 retry.go:31] will retry after 185.294µs: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.300622 1798370 retry.go:31] will retry after 454.807µs: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.301794 1798370 retry.go:31] will retry after 361.327µs: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.302949 1798370 retry.go:31] will retry after 856.243µs: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.304126 1798370 retry.go:31] will retry after 781.754µs: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.305271 1798370 retry.go:31] will retry after 1.796427ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.307526 1798370 retry.go:31] will retry after 1.502338ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.309779 1798370 retry.go:31] will retry after 4.014332ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.314042 1798370 retry.go:31] will retry after 5.5932ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.320318 1798370 retry.go:31] will retry after 11.056176ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.331502 1798370 retry.go:31] will retry after 14.332332ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.346729 1798370 retry.go:31] will retry after 23.060742ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
I1216 03:37:20.369913 1798370 retry.go:31] will retry after 28.434243ms: open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-015055 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-015055 -n scheduled-stop-015055
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-015055
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-015055 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 03:37:46.227420 1979294 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:37:46.227638 1979294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:37:46.227670 1979294 out.go:374] Setting ErrFile to fd 2...
	I1216 03:37:46.227690 1979294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:37:46.227979 1979294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:37:46.228280 1979294 out.go:368] Setting JSON to false
	I1216 03:37:46.228413 1979294 mustload.go:66] Loading cluster: scheduled-stop-015055
	I1216 03:37:46.229107 1979294 config.go:182] Loaded profile config "scheduled-stop-015055": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1216 03:37:46.229568 1979294 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/scheduled-stop-015055/config.json ...
	I1216 03:37:46.229811 1979294 mustload.go:66] Loading cluster: scheduled-stop-015055
	I1216 03:37:46.229991 1979294 config.go:182] Loaded profile config "scheduled-stop-015055": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-015055
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-015055: exit status 7 (68.770277ms)

                                                
                                                
-- stdout --
	scheduled-stop-015055
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-015055 -n scheduled-stop-015055
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-015055 -n scheduled-stop-015055: exit status 7 (74.039515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-015055" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-015055
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-015055: (4.713497297s)
--- PASS: TestScheduledStopUnix (108.25s)
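
The scheduled-stop lifecycle exercised above, condensed to its four verbs (all taken from commands in this test):

  out/minikube-linux-arm64 stop -p scheduled-stop-015055 --schedule 5m          # arm a stop
  out/minikube-linux-arm64 stop -p scheduled-stop-015055 --schedule 15s         # re-arm; the old scheduler process is killed
  out/minikube-linux-arm64 stop -p scheduled-stop-015055 --cancel-scheduled     # cancel all pending stops
  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-015055   # inspect

Re-arming explains the "killing process ... as it is an old scheduled stop" line in the second stderr block, and the retry loop on the pid file afterwards is the harness waiting for the new scheduler process to write its pid.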

                                                
                                    
TestInsufficientStorage (12.6s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-093520 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1216 03:38:44.847094 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-093520 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.040243081s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"833ccac2-d464-44c1-8867-adb754a061c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-093520] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b902864-0a0d-492a-83d6-554545e491b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22158"}}
	{"specversion":"1.0","id":"d6cd8f77-3341-4a45-9224-f339cdc68bc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"87a8c24b-ae44-4715-9567-4368e718fc63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig"}}
	{"specversion":"1.0","id":"b943a3e7-0b0c-4fde-8811-85ae3b6984a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube"}}
	{"specversion":"1.0","id":"28041263-b203-4f95-a906-6660bbb97ca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1e5b80c8-b6a5-43b7-8374-d18ab7dc27f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"87d8fabf-b9d0-4899-bdf6-3ad494004436","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"dc92169f-2a52-4d23-beee-cd9cb23e94e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ce3b059d-2a86-4a25-81fe-ca7dd931e2a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"77a6be95-35fa-4b1d-a1d4-9f636fdd9f09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ebfa753d-2d6f-4c02-9718-78ffdb11df5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-093520\" primary control-plane node in \"insufficient-storage-093520\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"487c630c-472b-4cc0-a12a-131a1068e0a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765575274-22117 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e26573e-5306-4bf4-8790-5d58374d4f42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"19b12efd-e7b4-4351-a635-167490192fd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-093520 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-093520 --output=json --layout=cluster: exit status 7 (300.859222ms)

-- stdout --
	{"Name":"insufficient-storage-093520","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-093520","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1216 03:38:46.419282 1981115 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-093520" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig

** /stderr **
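The --output=json --layout=cluster payload is meant for machine consumption. A hedged one-liner for pulling out the top-level state, assuming jq is available on the host (the .StatusName field comes straight from the JSON above; status itself still exits 7 here, but the JSON lands on stdout regardless):

	out/minikube-linux-arm64 status -p insufficient-storage-093520 --output=json --layout=cluster | jq -r .StatusName
	# prints "InsufficientStorage" while the disk check is failing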
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-093520 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-093520 --output=json --layout=cluster: exit status 7 (299.827542ms)

-- stdout --
	{"Name":"insufficient-storage-093520","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-093520","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1216 03:38:46.720704 1981182 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-093520" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
	E1216 03:38:46.730796 1981182 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/insufficient-storage-093520/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-093520" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-093520
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-093520: (1.9547874s)
--- PASS: TestInsufficientStorage (12.60s)

TestRunningBinaryUpgrade (63.00s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3731681919 start -p running-upgrade-386980 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1216 03:46:47.928227 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3731681919 start -p running-upgrade-386980 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.846764267s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-386980 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-386980 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.372887328s)
helpers_test.go:176: Cleaning up "running-upgrade-386980" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-386980
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-386980: (2.378167144s)
--- PASS: TestRunningBinaryUpgrade (63.00s)

TestMissingContainerUpgrade (143.92s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2850472761 start -p missing-upgrade-725188 --memory=3072 --driver=docker  --container-runtime=containerd
E1216 03:38:51.133824 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2850472761 start -p missing-upgrade-725188 --memory=3072 --driver=docker  --container-runtime=containerd: (1m4.396067192s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-725188
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-725188
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-725188 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-725188 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m14.119645692s)
helpers_test.go:176: Cleaning up "missing-upgrade-725188" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-725188
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-725188: (2.718659121s)
--- PASS: TestMissingContainerUpgrade (143.92s)
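The docker stop / docker rm pair is the point of this test: the cluster's container is removed out from under minikube, and the newer binary has to notice and recreate it. A condensed sketch of the same sequence, using the profile name from the log:

	docker stop missing-upgrade-725188 && docker rm missing-upgrade-725188
	out/minikube-linux-arm64 start -p missing-upgrade-725188 --memory=3072 --driver=docker --container-runtime=containerd   # recreates the missing container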

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-777938 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-777938 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (92.636178ms)

-- stdout --
	* [NoKubernetes-777938] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
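The exit status 14 is the expected MK_USAGE rejection: --no-kubernetes and --kubernetes-version are mutually exclusive. A hedged sketch of the two valid shapes, taken from the error text above:

	# either omit the version flag entirely when running without Kubernetes ...
	out/minikube-linux-arm64 start -p NoKubernetes-777938 --no-kubernetes --driver=docker --container-runtime=containerd
	# ... or clear a version pinned in the global config, then retry
	minikube config unset kubernetes-version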

TestNoKubernetes/serial/StartWithK8s (41.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-777938 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-777938 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.107860325s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-777938 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.93s)

TestNoKubernetes/serial/StartWithStopK8s (25.28s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-777938 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-777938 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.515789662s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-777938 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-777938 status -o json: exit status 2 (443.964857ms)

-- stdout --
	{"Name":"NoKubernetes-777938","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-777938
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-777938: (2.318950264s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.28s)

TestNoKubernetes/serial/Start (7.57s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-777938 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-777938 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.568749563s)
--- PASS: TestNoKubernetes/serial/Start (7.57s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
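The v0.0.0 cache directory is where Kubernetes binaries would land for a no-Kubernetes profile; the test passes when nothing was downloaded there. A hedged manual check of the same condition:

	ls /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/linux/arm64/v0.0.0 2>/dev/null || echo "no k8s downloads, as expected"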

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-777938 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-777938 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.048003ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
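The non-zero exit here is the assertion, not a failure: systemctl is-active exits 3 for an inactive unit, which minikube ssh surfaces as "Process exited with status 3". Checking it by hand:

	out/minikube-linux-arm64 ssh -p NoKubernetes-777938 "sudo systemctl is-active kubelet"; echo "exit=$?"
	# "inactive" with exit 3 means no kubelet is running, which is what --no-kubernetes promises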

TestNoKubernetes/serial/ProfileList (0.70s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-777938
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-777938: (1.288906271s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (6.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-777938 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-777938 --driver=docker  --container-runtime=containerd: (6.748174598s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-777938 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-777938 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.689622ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (4.92s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.92s)

TestStoppedBinaryUpgrade/Upgrade (299.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.758836205 start -p stopped-upgrade-325412 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.758836205 start -p stopped-upgrade-325412 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (29.463394829s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.758836205 -p stopped-upgrade-325412 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.758836205 -p stopped-upgrade-325412 stop: (1.228876471s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-325412 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1216 03:43:44.846688 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:43:51.133423 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:45:14.219648 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:45:51.130815 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-325412 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.439410181s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (299.14s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-325412
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-325412: (2.120539852s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)

TestPause/serial/Start (51.68s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-371764 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-371764 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (51.682374873s)
--- PASS: TestPause/serial/Start (51.68s)

TestPause/serial/SecondStartNoReconfiguration (6.28s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-371764 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-371764 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.262030928s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.28s)

TestPause/serial/Pause (0.77s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-371764 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-371764 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-371764 --output=json --layout=cluster: exit status 2 (334.480722ms)

-- stdout --
	{"Name":"pause-371764","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-371764","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.62s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-371764 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-371764 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (2.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-371764 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-371764 --alsologtostderr -v=5: (2.791283144s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

TestPause/serial/VerifyDeletedResources (0.38s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-371764
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-371764: exit status 1 (19.166856ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-371764: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
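The teardown check leans on Docker returning exit 1 once the profile's resources are gone. A condensed sketch of the same verification with name filters (each command should come back empty or fail after the delete):

	docker ps -a --filter name=pause-371764 --format '{{.Names}}'       # expect no output
	docker volume inspect pause-371764                                  # expect "no such volume", exit 1
	docker network ls --filter name=pause-371764 --format '{{.Name}}'   # expect no output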

TestNetworkPlugins/group/false (3.63s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-167684 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-167684 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (196.941507ms)

-- stdout --
	* [false-167684] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1216 03:49:03.774074 2032055 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:49:03.774251 2032055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:03.774262 2032055 out.go:374] Setting ErrFile to fd 2...
	I1216 03:49:03.774267 2032055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:49:03.774553 2032055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
	I1216 03:49:03.775013 2032055 out.go:368] Setting JSON to false
	I1216 03:49:03.775967 2032055 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34288,"bootTime":1765822656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 03:49:03.776042 2032055 start.go:143] virtualization:  
	I1216 03:49:03.779653 2032055 out.go:179] * [false-167684] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 03:49:03.783607 2032055 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:49:03.783707 2032055 notify.go:221] Checking for updates...
	I1216 03:49:03.789554 2032055 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:49:03.792559 2032055 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
	I1216 03:49:03.795413 2032055 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
	I1216 03:49:03.798399 2032055 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 03:49:03.801410 2032055 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:49:03.805171 2032055 config.go:182] Loaded profile config "kubernetes-upgrade-271074": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1216 03:49:03.805325 2032055 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:49:03.831818 2032055 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 03:49:03.831955 2032055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 03:49:03.889210 2032055 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 03:49:03.879855897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 03:49:03.889315 2032055 docker.go:319] overlay module found
	I1216 03:49:03.892435 2032055 out.go:179] * Using the docker driver based on user configuration
	I1216 03:49:03.895283 2032055 start.go:309] selected driver: docker
	I1216 03:49:03.895300 2032055 start.go:927] validating driver "docker" against <nil>
	I1216 03:49:03.895314 2032055 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:49:03.898846 2032055 out.go:203] 
	W1216 03:49:03.901842 2032055 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1216 03:49:03.904746 2032055 out.go:203] 

** /stderr **
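This MK_USAGE exit is the behavior under test: --cni=false is only accepted for runtimes that bring their own networking, and containerd is not one of them. A hedged sketch of invocations that would pass this validation (any CNI value other than false; auto lets minikube pick):

	out/minikube-linux-arm64 start -p false-167684 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd
	# or: --cni=auto / --cni=calico / --cni=flannel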
net_test.go:88: 
----------------------- debugLogs start: false-167684 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-167684

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-167684

>>> host: /etc/nsswitch.conf:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /etc/hosts:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /etc/resolv.conf:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-167684

>>> host: crictl pods:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: crictl containers:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> k8s: describe netcat deployment:
error: context "false-167684" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-167684" does not exist

>>> k8s: netcat logs:
error: context "false-167684" does not exist

>>> k8s: describe coredns deployment:
error: context "false-167684" does not exist

>>> k8s: describe coredns pods:
error: context "false-167684" does not exist

>>> k8s: coredns logs:
error: context "false-167684" does not exist

>>> k8s: describe api server pod(s):
error: context "false-167684" does not exist

>>> k8s: api server logs:
error: context "false-167684" does not exist

>>> host: /etc/cni:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: ip a s:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: ip r s:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: iptables-save:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: iptables table nat:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> k8s: describe kube-proxy daemon set:
error: context "false-167684" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-167684" does not exist

>>> k8s: kube-proxy logs:
error: context "false-167684" does not exist

>>> host: kubelet daemon status:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: kubelet daemon config:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> k8s: kubelet logs:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:41:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-271074
contexts:
- context:
    cluster: kubernetes-upgrade-271074
    user: kubernetes-upgrade-271074
  name: kubernetes-upgrade-271074
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-271074
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.crt
    client-key: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-167684

>>> host: docker daemon status:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: docker daemon config:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /etc/docker/daemon.json:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: docker system info:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: cri-docker daemon status:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: cri-docker daemon config:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: cri-dockerd version:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: containerd daemon status:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: containerd daemon config:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /etc/containerd/config.toml:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: containerd config dump:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: crio daemon status:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: crio daemon config:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: /etc/crio:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

>>> host: crio config:
* Profile "false-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167684"

----------------------- debugLogs end: false-167684 [took: 3.276327126s] --------------------------------
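Most of the probes above fail for one reason: the kubeconfig shown under ">>> k8s: kubectl config" has current-context: "" and no entry for false-167684 (the start was rejected before a cluster existed), so every kubectl call bails out. A hedged illustration of the same symptom:

	kubectl config get-contexts -o name          # lists kubernetes-upgrade-271074 only
	kubectl --context false-167684 get pods      # error: context "false-167684" does not exist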
helpers_test.go:176: Cleaning up "false-167684" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-167684
--- PASS: TestNetworkPlugins/group/false (3.63s)

TestStartStop/group/old-k8s-version/serial/FirstStart (69.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-580645 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1216 03:53:44.846388 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:53:51.133494 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-580645 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m9.008305629s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (69.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-580645 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [be59f585-f888-4601-b45b-ad46066170a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [be59f585-f888-4601-b45b-ad46066170a5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003357008s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-580645 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-580645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-580645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063386014s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-580645 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)
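The --images/--registries overrides deliberately point metrics-server at fake.domain, so the deployment exists but its image can never be pulled. A hedged way to confirm the override landed (expect the image reference to carry the fake.domain registry prefix):

	kubectl --context old-k8s-version-580645 -n kube-system describe deploy/metrics-server | grep -i image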

TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-580645 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-580645 --alsologtostderr -v=3: (12.105592503s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-580645 -n old-k8s-version-580645
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-580645 -n old-k8s-version-580645: exit status 7 (67.934037ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-580645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
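Note: with the profile stopped, status exits non-zero by design (minikube encodes component states into the exit code, so the 7 above just reflects everything being down), which is why the harness logs "may be ok". Enabling an addon against a stopped profile records it in the profile config so it comes up on the next start. A sketch:

  # status fails while the profile is stopped; don't abort on it
  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-580645 || true
  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-580645 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4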

TestStartStop/group/old-k8s-version/serial/SecondStart (56.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-580645 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1216 03:55:51.130795 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-580645 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (56.346824799s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-580645 -n old-k8s-version-580645
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.74s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-krtw4" [e0355242-c252-4399-8fe9-54ceb40e0a59] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003453315s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
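Note: UserAppExistsAfterStop asserts that the dashboard enabled while the profile was stopped is actually serving after SecondStart. The equivalent check by hand, assuming the same context:

  kubectl --context old-k8s-version-580645 -n kubernetes-dashboard wait \
      --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m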

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-krtw4" [e0355242-c252-4399-8fe9-54ceb40e0a59] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004212314s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-580645 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-580645 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
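Note: VerifyKubernetesImages dumps the node's image store and flags anything that is not a stock Kubernetes image; the kindnet and busybox hits above are expected leftovers from the CNI and DeployApp steps. A sketch of the same listing, assuming jq is available and that each JSON entry carries a repoTags field as in CRI image listings:

  out/minikube-linux-arm64 -p old-k8s-version-580645 image list --format=json | jq -r '.[].repoTags[]'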

TestStartStop/group/old-k8s-version/serial/Pause (3.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-580645 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-580645 -n old-k8s-version-580645
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-580645 -n old-k8s-version-580645: exit status 2 (318.176149ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-580645 -n old-k8s-version-580645
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-580645 -n old-k8s-version-580645: exit status 2 (332.323319ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-580645 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-580645 -n old-k8s-version-580645
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-580645 -n old-k8s-version-580645
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.09s)
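Note: the Pause check is symmetric: after pause, the status templates report the apiserver as Paused and the kubelet as Stopped (each via an exit status 2 that the test tolerates), and unpause must bring both back to clean exits. Condensed:

  out/minikube-linux-arm64 pause -p old-k8s-version-580645
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-580645 || true  # prints Paused
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-580645 || true    # prints Stopped
  out/minikube-linux-arm64 unpause -p old-k8s-version-580645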

TestStartStop/group/embed-certs/serial/FirstStart (50.72s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (50.723714906s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.72s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-092028 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d05e44bf-e898-4709-8723-f7bd7c3bcf5a] Pending
helpers_test.go:353: "busybox" [d05e44bf-e898-4709-8723-f7bd7c3bcf5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d05e44bf-e898-4709-8723-f7bd7c3bcf5a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003846335s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-092028 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-092028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-092028 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-092028 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-092028 --alsologtostderr -v=3: (12.086234127s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-092028 -n embed-certs-092028
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-092028 -n embed-certs-092028: exit status 7 (69.231773ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-092028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (49.91s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-092028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.564013506s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-092028 -n embed-certs-092028
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.91s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qg8ln" [a03a4ea7-b13e-4fcd-9c1a-8c055a97e64a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003477178s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qg8ln" [a03a4ea7-b13e-4fcd-9c1a-8c055a97e64a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00407988s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-092028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-092028 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-092028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-092028 -n embed-certs-092028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-092028 -n embed-certs-092028: exit status 2 (336.825904ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-092028 -n embed-certs-092028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-092028 -n embed-certs-092028: exit status 2 (341.801526ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-092028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-092028 -n embed-certs-092028
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-092028 -n embed-certs-092028
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1216 03:58:44.846642 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:58:51.134137 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:44.438726 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:44.445145 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:44.456555 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:44.477929 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:44.519341 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:44.600773 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:44.762890 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:45.085187 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:45.730341 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:47.012290 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:49.573657 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:59:54.695748 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:00:04.937302 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m21.526818232s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.53s)
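Note: default-k8s-diff-port runs the same lifecycle as the groups above, but with the apiserver bound to 8444 instead of minikube's default 8443, so the --apiserver-port flag is the only variable under test. One way to confirm the generated kubeconfig picked up the non-default port:

  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-862404")].cluster.server}'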

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-862404 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b9cfaf5e-f44e-4679-a103-efe1c68fb080] Pending
helpers_test.go:353: "busybox" [b9cfaf5e-f44e-4679-a103-efe1c68fb080] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b9cfaf5e-f44e-4679-a103-efe1c68fb080] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004175679s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-862404 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-862404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035557938s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-862404 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-862404 --alsologtostderr -v=3
E1216 04:00:25.418758 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-862404 --alsologtostderr -v=3: (12.153931679s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404: exit status 7 (74.323293ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-862404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1216 04:00:51.131208 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:01:06.380632 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/old-k8s-version-580645/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-862404 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (55.303736445s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.65s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-6jgmj" [e407e558-a1d3-4b8e-a44f-b937ffb05694] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003552004s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-6jgmj" [e407e558-a1d3-4b8e-a44f-b937ffb05694] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003388866s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-862404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-862404 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-862404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404: exit status 2 (347.750716ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404: exit status 2 (326.05689ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-862404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-862404 -n default-k8s-diff-port-862404
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)

TestStartStop/group/no-preload/serial/Stop (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-255023 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-255023 --alsologtostderr -v=3: (1.313344593s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.31s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-255023 -n no-preload-255023: exit status 7 (79.480985ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-255023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-450938 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-450938 --alsologtostderr -v=3: (1.309792031s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-450938 -n newest-cni-450938: exit status 7 (80.148653ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-450938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-450938 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/auto/Start (78.64s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m18.636388889s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.64s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-167684 "pgrep -a kubelet"
I1216 04:19:34.944664 1798370 config.go:182] Loaded profile config "auto-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
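Note: KubeletFlags reads the live kubelet command line over SSH; for the network-plugin groups this is where the configured CNI and container runtime are visible on the node itself:

  # print the running kubelet process with its full argument list
  out/minikube-linux-arm64 ssh -p auto-167684 "pgrep -a kubelet"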

TestNetworkPlugins/group/auto/NetCatPod (8.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-167684 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9b7xv" [6f3bd273-54a9-4e8a-bbd3-9711fb626536] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9b7xv" [6f3bd273-54a9-4e8a-bbd3-9711fb626536] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003793263s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.31s)
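Note: NetCatPod deploys the netcat/dnsutils pod that the DNS, Localhost, and HairPin probes below all exec into; replace --force keeps the step idempotent when the same manifest is reused across plugin groups. A sketch, assuming the repo's testdata manifest:

  kubectl --context auto-167684 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-167684 wait --for=condition=Available deploy/netcat --timeout=15m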

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-167684 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
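Note: DNS, Localhost, and HairPin are the three per-plugin connectivity probes: a CoreDNS lookup, a loopback dial, and a hairpin dial of the pod's own service (nc's -w 5 is a timeout, -z means scan without sending data). Run together, exactly as logged:

  kubectl --context auto-167684 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"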

TestNetworkPlugins/group/flannel/Start (58.66s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.65564982s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.66s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-bxj7b" [075498b5-c062-4e44-9e3f-a84c975d17a4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003614783s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
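Note: plugins that run their own controller pods (flannel here, calico below) get this extra ControllerPod gate before the connectivity probes. The same wait by hand:

  kubectl --context flannel-167684 -n kube-flannel wait \
      --for=condition=Ready pod -l app=flannel --timeout=10m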

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-167684 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-167684 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6p8zg" [2cf7908d-8e63-40f0-a752-da93c98f591f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6p8zg" [2cf7908d-8e63-40f0-a752-da93c98f591f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004336514s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-167684 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (57.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (57.010001119s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-wfbmj" [87253639-8df9-4fa8-9263-2f6fc899773b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-wfbmj" [87253639-8df9-4fa8-9263-2f6fc899773b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00386284s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-167684 "pgrep -a kubelet"
I1216 04:22:42.934787 1798370 config.go:182] Loaded profile config "calico-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-167684 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5xvs7" [72fba6da-2378-48d4-b2d3-f330e721de66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5xvs7" [72fba6da-2378-48d4-b2d3-f330e721de66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003072702s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.35s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-167684 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (53.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (53.10246939s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.11s)
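Note: custom-flannel exercises the --cni escape hatch: instead of a built-in plugin name, the flag takes a path and minikube applies that manifest as the CNI. A minimal form of the logged invocation:

  out/minikube-linux-arm64 start -p custom-flannel-167684 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd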

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-167684 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-167684 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mzrh5" [6fece64c-e85c-4c75-9237-90cf7be08215] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mzrh5" [6fece64c-e85c-4c75-9237-90cf7be08215] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.009443156s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.30s)

TestNetworkPlugins/group/kindnet/Start (89.42s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m29.416411231s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-167684 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)
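HairPin exercises hairpin NAT: the pod dials the hostname netcat, which the test manifest presumably exposes as a Service fronting that same pod, so the CNI must loop the traffic back to its origin. A hedged manual probe (the Service name netcat is inferred from the command above, not confirmed here):

    kubectl --context custom-flannel-167684 get svc netcat
    kubectl --context custom-flannel-167684 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -z netcat 8080'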

                                                
                                    
TestNetworkPlugins/group/bridge/Start (74.19s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1216 04:24:45.489372 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:24:55.730761 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:05.534063 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/default-k8s-diff-port-862404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:16.212312 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:34.217432 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:46.430254 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:46.436690 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:46.448069 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:46.469583 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:46.511116 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:46.592837 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:46.754344 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:47.076131 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m14.189695594s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-ttkml" [5c00c54a-42ed-48fd-b437-d49ad2edb5a4] Running
E1216 04:25:47.718484 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:49.000368 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:51.131552 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:51.562130 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00380451s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
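ControllerPod waits for the kindnet DaemonSet pod (label app=kindnet) in kube-system to report Ready. A minimal manual equivalent, assuming the kindnet-167684 profile still exists:

    kubectl --context kindnet-167684 -n kube-system get pods -l app=kindnet -o wide
    kubectl --context kindnet-167684 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m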

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-167684 "pgrep -a kubelet"
I1216 04:25:53.985442 1798370 config.go:182] Loaded profile config "kindnet-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-167684 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zbgsk" [f10c151c-3ae7-442b-b031-7c08b31def4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 04:25:56.683657 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/no-preload-255023/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:57.174192 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/auto-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-zbgsk" [f10c151c-3ae7-442b-b031-7c08b31def4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003544886s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-167684 "pgrep -a kubelet"
I1216 04:25:59.786641 1798370 config.go:182] Loaded profile config "bridge-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.56s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-167684 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-r57kv" [3ea85147-551a-40b4-932a-035c37a1a793] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 04:26:03.203745 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:03.210163 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:03.221576 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:03.243087 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:03.284966 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-r57kv" [3ea85147-551a-40b4-932a-035c37a1a793] Running
E1216 04:26:03.366776 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:03.528306 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:03.850033 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.018201183s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.56s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-167684 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1216 04:26:04.491654 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/flannel-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-167684 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (72.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-167684 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m12.818954004s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.82s)
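--enable-default-cni=true has minikube install its built-in bridge CNI configuration rather than a third-party plugin. One way to inspect what was written on the node, assuming the standard /etc/cni/net.d location (file names may differ by minikube version):

    out/minikube-linux-arm64 ssh -p enable-default-cni-167684 "ls /etc/cni/net.d && sudo cat /etc/cni/net.d/*"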

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-167684 "pgrep -a kubelet"
I1216 04:27:42.856642 1798370 config.go:182] Loaded profile config "enable-default-cni-167684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-167684 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fvh47" [68e72fe2-6bf4-4521-9382-234b0f960bab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 04:27:46.825598 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/calico-167684/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-fvh47" [68e72fe2-6bf4-4521-9382-234b0f960bab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003610849s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-167684 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-167684 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.52
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.22
392 TestNetworkPlugins/group/kubenet 3.75
400 TestNetworkPlugins/group/cilium 3.95
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.52s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-646349 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-646349" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-646349
--- SKIP: TestDownloadOnlyKic (0.52s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-650877" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-650877
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.75s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-167684 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-167684

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-167684

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /etc/hosts:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /etc/resolv.conf:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-167684

>>> host: crictl pods:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: crictl containers:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> k8s: describe netcat deployment:
error: context "kubenet-167684" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-167684" does not exist

>>> k8s: netcat logs:
error: context "kubenet-167684" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-167684" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-167684" does not exist

>>> k8s: coredns logs:
error: context "kubenet-167684" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-167684" does not exist

>>> k8s: api server logs:
error: context "kubenet-167684" does not exist

>>> host: /etc/cni:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: ip a s:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: ip r s:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: iptables-save:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: iptables table nat:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-167684" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-167684" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-167684" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: kubelet daemon config:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> k8s: kubelet logs:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:41:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-271074
contexts:
- context:
    cluster: kubernetes-upgrade-271074
    user: kubernetes-upgrade-271074
  name: kubernetes-upgrade-271074
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-271074
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.crt
    client-key: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.key
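The kubeconfig dump above explains the failures throughout this debug log: current-context is empty and the only entry left is kubernetes-upgrade-271074, so every kubectl call against kubenet-167684 fails with "context was not found". A quick way to confirm that state from the same workstation (a sketch, not part of the test run):

    kubectl config get-contexts
    kubectl config current-context || echo "no current context set"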

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-167684

>>> host: docker daemon status:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: docker daemon config:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: docker system info:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: cri-docker daemon status:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: cri-docker daemon config:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: cri-dockerd version:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: containerd daemon status:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: containerd daemon config:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: containerd config dump:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

>>> host: crio daemon status:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167684"

                                                
                                                
----------------------- debugLogs end: kubenet-167684 [took: 3.573783882s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-167684" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-167684
--- SKIP: TestNetworkPlugins/group/kubenet (3.75s)

                                                
                                    

TestNetworkPlugins/group/cilium (3.95s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-167684 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-167684

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-167684

>>> host: /etc/nsswitch.conf:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /etc/hosts:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /etc/resolv.conf:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-167684

>>> host: crictl pods:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: crictl containers:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> k8s: describe netcat deployment:
error: context "cilium-167684" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-167684" does not exist

>>> k8s: netcat logs:
error: context "cilium-167684" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-167684" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-167684" does not exist

>>> k8s: coredns logs:
error: context "cilium-167684" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-167684" does not exist

>>> k8s: api server logs:
error: context "cilium-167684" does not exist

>>> host: /etc/cni:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: ip a s:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: ip r s:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: iptables-save:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: iptables table nat:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-167684

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-167684

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-167684" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-167684" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-167684

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-167684

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-167684" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-167684" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-167684" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-167684" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-167684" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: kubelet daemon config:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> k8s: kubelet logs:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:41:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-271074
contexts:
- context:
    cluster: kubernetes-upgrade-271074
    user: kubernetes-upgrade-271074
  name: kubernetes-upgrade-271074
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-271074
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.crt
    client-key: /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/kubernetes-upgrade-271074/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-167684

>>> host: docker daemon status:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: docker daemon config:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: docker system info:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: cri-docker daemon status:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: cri-docker daemon config:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: cri-dockerd version:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: containerd daemon status:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: containerd daemon config:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: containerd config dump:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: crio daemon status:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: crio daemon config:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: /etc/crio:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

>>> host: crio config:
* Profile "cilium-167684" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167684"

----------------------- debugLogs end: cilium-167684 [took: 3.769309181s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-167684" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-167684
--- SKIP: TestNetworkPlugins/group/cilium (3.95s)